2026-03-10T05:40:56.261 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-10T05:40:56.265 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T05:40:56.286 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/919
branch: squid
description: orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/connectivity}
email: null
first_in_suite: false
flavor: default
job_id: '919'
last_in_suite: false
machine_type: vps
name: kyr-2026-03-10_01:00:38-orch-squid-none-default-vps
no_nested_subset: false
os_type: ubuntu
os_version: '22.04'
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      global:
        mon election default strategy: 3
      mgr:
        debug mgr: 20
        debug ms: 1
        mgr/cephadm/use_agent: false
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - CEPHADM_STRAY_DAEMON
    - CEPHADM_FAILED_DAEMON
    - CEPHADM_AGENT_DOWN
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  workunit:
    branch: tt-squid
    sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - mon.c
  - mgr.y
  - osd.0
  - osd.1
  - osd.2
  - osd.3
  - client.0
  - node-exporter.a
  - alertmanager.a
- - mon.b
  - mgr.x
  - osd.4
  - osd.5
  - osd.6
  - osd.7
  - client.1
  - prometheus.a
  - grafana.a
  - node-exporter.b
seed: 8043
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
targets:
  vm02.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCEsp5JobYLXdW0zTWQDTODqynbGSo1tOnFp3kiNXMIiy7p00vGi3yFe8rzbX5tgASIfp1Aslf1BAg8MagI2kJg=
  vm05.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ3MIBSH6+RhMQZxw28cznBoPZYcqfK+U+NOFTkvXQdgcM+Nf/cU2IraFDZIoRad/JjWtI08kKvfc5wfDiRDI0w=
tasks:
- cephadm:
    cephadm_branch: v17.2.0
    cephadm_git_url: https://github.com/ceph/ceph
    image: quay.io/ceph/ceph:v17.2.0
- cephadm.shell:
    mon.a:
    - ceph config set mgr mgr/cephadm/use_repo_digest false --force
- cephadm.shell:
    env:
    - sha1
    mon.a:
    - radosgw-admin realm create --rgw-realm=r --default
    - radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
    - radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=z --master --default
    - radosgw-admin period update --rgw-realm=r --commit
    - ceph orch apply rgw foo --realm r --zone z --placement=2 --port=8000
    - ceph orch apply rgw smpl
    - ceph osd pool create foo
    - rbd pool init foo
    - ceph orch apply iscsi foo u p
    - sleep 120
    - ceph config set mon mon_warn_on_insecure_global_id_reclaim false --force
    - ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false --force
    - ceph config set global log_to_journald false --force
    - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1
- cephadm.shell:
    env:
    - sha1
    mon.a:
    - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; ceph health detail ; sleep 30 ; done
    - ceph orch ps
    - ceph versions
    - echo "wait for servicemap items w/ changing names to refresh"
    - sleep 60
    - ceph orch ps
    - ceph versions
    - ceph orch upgrade status
    - ceph health detail
    - ceph versions | jq -e '.overall | length == 1'
    - ceph versions | jq -e '.overall | keys' | grep $sha1
    - ceph orch ls | grep '^osd '
- cephadm.shell:
    mon.a:
    - ceph orch upgrade ls
    - ceph orch upgrade ls --image quay.io/ceph/ceph --show-all-versions | grep 16.2.0
    - ceph orch upgrade ls --image quay.io/ceph/ceph --tags | grep v16.2.2
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-10_01:00:38
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-10T05:40:56.286 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa; will attempt to use it
2026-03-10T05:40:56.286 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks
2026-03-10T05:40:56.287 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-10T05:40:56.287 INFO:teuthology.task.internal:Checking packages...
2026-03-10T05:40:56.287 INFO:teuthology.task.internal:Checking packages for os_type 'ubuntu', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-10T05:40:56.287 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-10T05:40:56.287 INFO:teuthology.packaging:ref: None
2026-03-10T05:40:56.287 INFO:teuthology.packaging:tag: None
2026-03-10T05:40:56.287 INFO:teuthology.packaging:branch: squid
2026-03-10T05:40:56.287 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T05:40:56.287 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=squid
2026-03-10T05:40:56.965 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678-ge911bdeb-1jammy
2026-03-10T05:40:56.966 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-10T05:40:56.967 INFO:teuthology.task.internal:no buildpackages task found
2026-03-10T05:40:56.967 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-10T05:40:56.967 INFO:teuthology.task.internal:Saving configuration
2026-03-10T05:40:56.972 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-10T05:40:56.973 INFO:teuthology.task.internal.check_lock:Checking locks...
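The 4-wait step in the task list above is the bash loop that polls `ceph orch upgrade status` every 30 seconds until `in_progress` goes false, bailing out early if the status message mentions an error. A minimal Python sketch of the same polling logic (a hypothetical standalone helper, assuming the `ceph` CLI is on PATH and that `ceph orch upgrade status` emits JSON, as the jq pipeline implies):

    import json
    import subprocess
    import time

    def upgrade_status() -> dict:
        # `ceph orch upgrade status` prints a JSON object; the bash loop
        # above feeds the same output to jq.
        out = subprocess.run(["ceph", "orch", "upgrade", "status"],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    def wait_for_upgrade(poll_seconds: int = 30) -> None:
        while True:
            status = upgrade_status()
            if not status.get("in_progress"):
                break  # done; the job then asserts on `ceph versions`
            if "Error" in (status.get("message") or ""):
                # the bash loop simply exits here; raising is a choice
                raise RuntimeError("upgrade failed: %s" % status["message"])
            # the real loop also dumps `ceph orch ps`, `ceph versions` and
            # `ceph health detail` on each iteration for the archive
            time.sleep(poll_seconds)

    if __name__ == "__main__":
        wait_for_upgrade()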
2026-03-10T05:40:56.979 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm02.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/919', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 05:39:47.440929', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:02', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCEsp5JobYLXdW0zTWQDTODqynbGSo1tOnFp3kiNXMIiy7p00vGi3yFe8rzbX5tgASIfp1Aslf1BAg8MagI2kJg='}
2026-03-10T05:40:56.984 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm05.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/919', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 05:39:47.440473', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:05', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ3MIBSH6+RhMQZxw28cznBoPZYcqfK+U+NOFTkvXQdgcM+Nf/cU2IraFDZIoRad/JjWtI08kKvfc5wfDiRDI0w='}
2026-03-10T05:40:56.984 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-10T05:40:56.985 INFO:teuthology.task.internal:roles: ubuntu@vm02.local - ['mon.a', 'mon.c', 'mgr.y', 'osd.0', 'osd.1', 'osd.2', 'osd.3', 'client.0', 'node-exporter.a', 'alertmanager.a']
2026-03-10T05:40:56.985 INFO:teuthology.task.internal:roles: ubuntu@vm05.local - ['mon.b', 'mgr.x', 'osd.4', 'osd.5', 'osd.6', 'osd.7', 'client.1', 'prometheus.a', 'grafana.a', 'node-exporter.b']
2026-03-10T05:40:56.985 INFO:teuthology.run_tasks:Running task console_log...
2026-03-10T05:40:56.991 DEBUG:teuthology.task.console_log:vm02 does not support IPMI; excluding
2026-03-10T05:40:56.995 DEBUG:teuthology.task.console_log:vm05 does not support IPMI; excluding
2026-03-10T05:40:56.995 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f1bb24a3f40>, signals=[15])
2026-03-10T05:40:56.995 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-10T05:40:56.996 INFO:teuthology.task.internal:Opening connections...
2026-03-10T05:40:56.996 DEBUG:teuthology.task.internal:connecting to ubuntu@vm02.local
2026-03-10T05:40:56.997 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm02.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T05:40:57.057 DEBUG:teuthology.task.internal:connecting to ubuntu@vm05.local
2026-03-10T05:40:57.057 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm05.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T05:40:57.114 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-10T05:40:57.115 DEBUG:teuthology.orchestra.run.vm02:> uname -m
2026-03-10T05:40:57.122 INFO:teuthology.orchestra.run.vm02.stdout:x86_64
2026-03-10T05:40:57.122 DEBUG:teuthology.orchestra.run.vm02:> cat /etc/os-release
2026-03-10T05:40:57.167 INFO:teuthology.orchestra.run.vm02.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-10T05:40:57.167 INFO:teuthology.orchestra.run.vm02.stdout:NAME="Ubuntu"
2026-03-10T05:40:57.167 INFO:teuthology.orchestra.run.vm02.stdout:VERSION_ID="22.04"
2026-03-10T05:40:57.167 INFO:teuthology.orchestra.run.vm02.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-10T05:40:57.167 INFO:teuthology.orchestra.run.vm02.stdout:VERSION_CODENAME=jammy
2026-03-10T05:40:57.167 INFO:teuthology.orchestra.run.vm02.stdout:ID=ubuntu
2026-03-10T05:40:57.167 INFO:teuthology.orchestra.run.vm02.stdout:ID_LIKE=debian
2026-03-10T05:40:57.167 INFO:teuthology.orchestra.run.vm02.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-10T05:40:57.167 INFO:teuthology.orchestra.run.vm02.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-10T05:40:57.167 INFO:teuthology.orchestra.run.vm02.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-10T05:40:57.167 INFO:teuthology.orchestra.run.vm02.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-10T05:40:57.167 INFO:teuthology.orchestra.run.vm02.stdout:UBUNTU_CODENAME=jammy
2026-03-10T05:40:57.167 INFO:teuthology.lock.ops:Updating vm02.local on lock server
2026-03-10T05:40:57.172 DEBUG:teuthology.orchestra.run.vm05:> uname -m
2026-03-10T05:40:57.177 INFO:teuthology.orchestra.run.vm05.stdout:x86_64
2026-03-10T05:40:57.177 DEBUG:teuthology.orchestra.run.vm05:> cat /etc/os-release
2026-03-10T05:40:57.221 INFO:teuthology.orchestra.run.vm05.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-10T05:40:57.221 INFO:teuthology.orchestra.run.vm05.stdout:NAME="Ubuntu"
2026-03-10T05:40:57.221 INFO:teuthology.orchestra.run.vm05.stdout:VERSION_ID="22.04"
2026-03-10T05:40:57.221 INFO:teuthology.orchestra.run.vm05.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-10T05:40:57.221 INFO:teuthology.orchestra.run.vm05.stdout:VERSION_CODENAME=jammy
2026-03-10T05:40:57.221 INFO:teuthology.orchestra.run.vm05.stdout:ID=ubuntu
2026-03-10T05:40:57.221 INFO:teuthology.orchestra.run.vm05.stdout:ID_LIKE=debian
2026-03-10T05:40:57.221 INFO:teuthology.orchestra.run.vm05.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-10T05:40:57.221 INFO:teuthology.orchestra.run.vm05.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-10T05:40:57.221 INFO:teuthology.orchestra.run.vm05.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-10T05:40:57.221 INFO:teuthology.orchestra.run.vm05.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-10T05:40:57.221 INFO:teuthology.orchestra.run.vm05.stdout:UBUNTU_CODENAME=jammy
2026-03-10T05:40:57.221 INFO:teuthology.lock.ops:Updating vm05.local on lock server
2026-03-10T05:40:57.225 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-10T05:40:57.227 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-10T05:40:57.228 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-10T05:40:57.228 DEBUG:teuthology.orchestra.run.vm02:> test '!' -e /home/ubuntu/cephtest
2026-03-10T05:40:57.228 DEBUG:teuthology.orchestra.run.vm05:> test '!' -e /home/ubuntu/cephtest
2026-03-10T05:40:57.264 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-10T05:40:57.265 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-10T05:40:57.265 DEBUG:teuthology.orchestra.run.vm02:> test -z $(ls -A /var/lib/ceph)
2026-03-10T05:40:57.273 DEBUG:teuthology.orchestra.run.vm05:> test -z $(ls -A /var/lib/ceph)
2026-03-10T05:40:57.275 INFO:teuthology.orchestra.run.vm02.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T05:40:57.309 INFO:teuthology.orchestra.run.vm05.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T05:40:57.309 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-10T05:40:57.316 DEBUG:teuthology.orchestra.run.vm02:> test -e /ceph-qa-ready
2026-03-10T05:40:57.319 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T05:40:57.553 DEBUG:teuthology.orchestra.run.vm05:> test -e /ceph-qa-ready
2026-03-10T05:40:57.556 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T05:40:57.777 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-10T05:40:57.778 INFO:teuthology.task.internal:Creating test directory...
2026-03-10T05:40:57.778 DEBUG:teuthology.orchestra.run.vm02:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T05:40:57.779 DEBUG:teuthology.orchestra.run.vm05:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T05:40:57.782 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-10T05:40:57.783 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-10T05:40:57.784 INFO:teuthology.task.internal:Creating archive directory...
2026-03-10T05:40:57.784 DEBUG:teuthology.orchestra.run.vm02:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T05:40:57.825 DEBUG:teuthology.orchestra.run.vm05:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T05:40:57.830 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-10T05:40:57.831 INFO:teuthology.task.internal:Enabling coredump saving...
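The internal.check_ceph_data check above relies on shell substitution: when /var/lib/ceph does not exist, `ls` fails and prints to stderr (the two `ls: cannot access` lines), the command substitution expands to nothing, and `test -z` with an empty operand succeeds, so a missing directory passes the check just like an empty one. A rough Python equivalent of that semantics:

    import os

    def ceph_data_is_clean(path: str = "/var/lib/ceph") -> bool:
        # mirrors `test -z $(ls -A /var/lib/ceph)`: empty and absent
        # directories both count as clean
        try:
            return not os.listdir(path)
        except FileNotFoundError:
            return True

    print(ceph_data_is_clean())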
2026-03-10T05:40:57.831 DEBUG:teuthology.orchestra.run.vm02:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T05:40:57.870 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T05:40:57.870 DEBUG:teuthology.orchestra.run.vm05:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T05:40:57.872 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T05:40:57.872 DEBUG:teuthology.orchestra.run.vm02:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T05:40:57.913 DEBUG:teuthology.orchestra.run.vm05:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T05:40:57.920 INFO:teuthology.orchestra.run.vm02.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T05:40:57.921 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T05:40:57.925 INFO:teuthology.orchestra.run.vm02.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T05:40:57.926 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T05:40:57.926 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-10T05:40:57.928 INFO:teuthology.task.internal:Configuring sudo...
2026-03-10T05:40:57.928 DEBUG:teuthology.orchestra.run.vm02:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T05:40:57.968 DEBUG:teuthology.orchestra.run.vm05:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T05:40:57.976 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-10T05:40:57.978 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
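The coredump step above first tests for /run/.containerenv or /.dockerenv (both hosts return 1, i.e. they are not containers), then points kernel.core_pattern at the job's archive directory and persists it via /etc/sysctl.conf. %t and %p are standard kernel core_pattern specifiers (time of dump in epoch seconds, PID of the dumping process), so every core file gets a unique name. A small sketch of the expansion, with illustrative values only:

    # expand the core_pattern set above the way the kernel would;
    # %t is the dump time (epoch seconds), %p the dumping process's PID
    CORE_PATTERN = "/home/ubuntu/cephtest/archive/coredump/%t.%p.core"

    def expected_core_path(pid: int, when: int) -> str:
        return CORE_PATTERN.replace("%t", str(when)).replace("%p", str(pid))

    # prints a path like .../coredump/1772736000.12345.core
    print(expected_core_path(pid=12345, when=1772736000))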
2026-03-10T05:40:57.978 DEBUG:teuthology.orchestra.run.vm02:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T05:40:58.016 DEBUG:teuthology.orchestra.run.vm05:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T05:40:58.020 DEBUG:teuthology.orchestra.run.vm02:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T05:40:58.062 DEBUG:teuthology.orchestra.run.vm02:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T05:40:58.106 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-10T05:40:58.106 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T05:40:58.154 DEBUG:teuthology.orchestra.run.vm05:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T05:40:58.158 DEBUG:teuthology.orchestra.run.vm05:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T05:40:58.208 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-10T05:40:58.209 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T05:40:58.256 DEBUG:teuthology.orchestra.run.vm02:> sudo service rsyslog restart
2026-03-10T05:40:58.257 DEBUG:teuthology.orchestra.run.vm05:> sudo service rsyslog restart
2026-03-10T05:40:58.313 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-10T05:40:58.315 INFO:teuthology.task.internal:Starting timer...
2026-03-10T05:40:58.315 INFO:teuthology.run_tasks:Running task pcp...
2026-03-10T05:40:58.317 INFO:teuthology.run_tasks:Running task selinux...
2026-03-10T05:40:58.319 INFO:teuthology.task.selinux:Excluding vm02: VMs are not yet supported
2026-03-10T05:40:58.319 INFO:teuthology.task.selinux:Excluding vm05: VMs are not yet supported
2026-03-10T05:40:58.319 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-10T05:40:58.319 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-10T05:40:58.319 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-10T05:40:58.320 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-10T05:40:58.321 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-10T05:40:58.321 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-10T05:40:58.322 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-10T05:40:58.912 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-10T05:40:58.917 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-10T05:40:58.917 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventoryp1439pgb --limit vm02.local,vm05.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-10T05:43:16.840 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm02.local'), Remote(name='ubuntu@vm05.local')]
2026-03-10T05:43:16.841 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm02.local'
2026-03-10T05:43:16.841 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm02.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T05:43:16.899 DEBUG:teuthology.orchestra.run.vm02:> true
2026-03-10T05:43:17.104 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm02.local'
2026-03-10T05:43:17.105 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm05.local'
2026-03-10T05:43:17.105 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm05.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T05:43:17.165 DEBUG:teuthology.orchestra.run.vm05:> true
2026-03-10T05:43:17.360 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm05.local'
2026-03-10T05:43:17.360 INFO:teuthology.run_tasks:Running task clock...
2026-03-10T05:43:17.363 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
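The exact ansible-playbook invocation appears in the DEBUG line above; rebuilt as an argument vector it is easier to scan. A sketch (the paths, host list, and throwaway inventory file are the ones from this particular run):

    import json
    import subprocess

    extra_vars = {"ansible_ssh_user": "ubuntu", "timezone": "UTC"}
    argv = [
        "ansible-playbook", "-v",
        "--extra-vars", json.dumps(extra_vars),
        "-i", "/tmp/teuth_ansible_inventoryp1439pgb",
        "--limit", "vm02.local,vm05.local",
        "/home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml",
        "--skip-tags", "nagios,monitoring-scripts,hostname,pubkeys,zap,"
                       "sudoers,kerberos,ntp-client,resolvconf,cpan,nfs",
    ]
    # teuthology's ansible task drives this and then reconnects to both
    # remotes, as the "Reconnecting to" lines above show
    subprocess.run(argv, check=True)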
2026-03-10T05:43:17.363 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T05:43:17.363 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T05:43:17.364 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T05:43:17.364 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T05:43:17.379 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:17 ntpd[16099]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-10T05:43:17.379 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:17 ntpd[16099]: Command line: ntpd -gq
2026-03-10T05:43:17.379 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:17 ntpd[16099]: ----------------------------------------------------
2026-03-10T05:43:17.379 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:17 ntpd[16099]: ntp-4 is maintained by Network Time Foundation,
2026-03-10T05:43:17.379 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:17 ntpd[16099]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-10T05:43:17.379 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:17 ntpd[16099]: corporation. Support and training for ntp-4 are
2026-03-10T05:43:17.379 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:17 ntpd[16099]: available at https://www.nwtime.org/support
2026-03-10T05:43:17.379 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:17 ntpd[16099]: ----------------------------------------------------
2026-03-10T05:43:17.379 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:17 ntpd[16099]: proto: precision = 0.029 usec (-25)
2026-03-10T05:43:17.379 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:17 ntpd[16099]: basedate set to 2022-02-04
2026-03-10T05:43:17.379 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:17 ntpd[16099]: gps base set to 2022-02-06 (week 2196)
2026-03-10T05:43:17.379 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:17 ntpd[16099]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-10T05:43:17.379 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:17 ntpd[16099]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-10T05:43:17.379 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:17 ntpd[16099]: Listen and drop on 0 v6wildcard [::]:123
2026-03-10T05:43:17.379 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:17 ntpd[16099]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-10T05:43:17.379 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:17 ntpd[16099]: Listen normally on 2 lo 127.0.0.1:123
2026-03-10T05:43:17.379 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:17 ntpd[16099]: Listen normally on 3 ens3 192.168.123.102:123
2026-03-10T05:43:17.379 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:17 ntpd[16099]: Listen normally on 4 lo [::1]:123
2026-03-10T05:43:17.379 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:17 ntpd[16099]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:2%2]:123
2026-03-10T05:43:17.379 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:17 ntpd[16099]: Listening on routing socket on fd #22 for interface updates
2026-03-10T05:43:17.379 INFO:teuthology.orchestra.run.vm02.stderr:10 Mar 05:43:17 ntpd[16099]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 73 days ago
2026-03-10T05:43:17.416 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:17 ntpd[16110]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-10T05:43:17.416 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:17 ntpd[16110]: Command line: ntpd -gq
2026-03-10T05:43:17.416 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:17 ntpd[16110]: ----------------------------------------------------
2026-03-10T05:43:17.416 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:17 ntpd[16110]: ntp-4 is maintained by Network Time Foundation,
2026-03-10T05:43:17.416 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:17 ntpd[16110]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-10T05:43:17.416 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:17 ntpd[16110]: corporation. Support and training for ntp-4 are
2026-03-10T05:43:17.416 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:17 ntpd[16110]: available at https://www.nwtime.org/support
2026-03-10T05:43:17.416 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:17 ntpd[16110]: ----------------------------------------------------
2026-03-10T05:43:17.416 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:17 ntpd[16110]: proto: precision = 0.029 usec (-25)
2026-03-10T05:43:17.416 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:17 ntpd[16110]: basedate set to 2022-02-04
2026-03-10T05:43:17.416 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:17 ntpd[16110]: gps base set to 2022-02-06 (week 2196)
2026-03-10T05:43:17.416 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:17 ntpd[16110]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-10T05:43:17.416 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:17 ntpd[16110]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-10T05:43:17.416 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:17 ntpd[16110]: Listen and drop on 0 v6wildcard [::]:123
2026-03-10T05:43:17.416 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:17 ntpd[16110]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-10T05:43:17.416 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:17 ntpd[16110]: Listen normally on 2 lo 127.0.0.1:123
2026-03-10T05:43:17.416 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:17 ntpd[16110]: Listen normally on 3 ens3 192.168.123.105:123
2026-03-10T05:43:17.416 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:17 ntpd[16110]: Listen normally on 4 lo [::1]:123
2026-03-10T05:43:17.416 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:17 ntpd[16110]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:5%2]:123
2026-03-10T05:43:17.416 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:17 ntpd[16110]: Listening on routing socket on fd #22 for interface updates
2026-03-10T05:43:17.416 INFO:teuthology.orchestra.run.vm05.stderr:10 Mar 05:43:17 ntpd[16110]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 73 days ago
2026-03-10T05:43:18.379 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:18 ntpd[16099]: Soliciting pool server 46.224.156.215
2026-03-10T05:43:18.416 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:18 ntpd[16110]: Soliciting pool server 46.224.156.215
2026-03-10T05:43:19.377 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:19 ntpd[16099]: Soliciting pool server 134.60.1.27
2026-03-10T05:43:19.377 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:19 ntpd[16099]: Soliciting pool server 141.84.43.73
2026-03-10T05:43:19.415 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:19 ntpd[16110]: Soliciting pool server 134.60.1.27
2026-03-10T05:43:19.415 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:19 ntpd[16110]: Soliciting pool server 141.84.43.73
2026-03-10T05:43:20.376 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:20 ntpd[16099]: Soliciting pool server 185.252.140.126
2026-03-10T05:43:20.376 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:20 ntpd[16099]: Soliciting pool server 194.36.144.87
2026-03-10T05:43:20.377 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:20 ntpd[16099]: Soliciting pool server 18.192.244.117
2026-03-10T05:43:20.415 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:20 ntpd[16110]: Soliciting pool server 185.252.140.126
2026-03-10T05:43:20.415 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:20 ntpd[16110]: Soliciting pool server 194.36.144.87
2026-03-10T05:43:20.416 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:20 ntpd[16110]: Soliciting pool server 18.192.244.117
2026-03-10T05:43:21.376 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:21 ntpd[16099]: Soliciting pool server 217.154.182.60
2026-03-10T05:43:21.376 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:21 ntpd[16099]: Soliciting pool server 185.252.140.125
2026-03-10T05:43:21.376 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:21 ntpd[16099]: Soliciting pool server 82.165.178.31
2026-03-10T05:43:21.376 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:21 ntpd[16099]: Soliciting pool server 157.90.16.149
2026-03-10T05:43:21.415 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:21 ntpd[16110]: Soliciting pool server 217.154.182.60
2026-03-10T05:43:21.415 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:21 ntpd[16110]: Soliciting pool server 185.252.140.125
2026-03-10T05:43:21.415 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:21 ntpd[16110]: Soliciting pool server 82.165.178.31
2026-03-10T05:43:21.415 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:21 ntpd[16110]: Soliciting pool server 157.90.16.149
2026-03-10T05:43:22.375 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:22 ntpd[16099]: Soliciting pool server 212.132.108.186
2026-03-10T05:43:22.375 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:22 ntpd[16099]: Soliciting pool server 168.119.211.223
2026-03-10T05:43:22.375 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:22 ntpd[16099]: Soliciting pool server 94.130.23.46
2026-03-10T05:43:22.376 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:22 ntpd[16099]: Soliciting pool server 185.125.190.56
2026-03-10T05:43:22.415 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:22 ntpd[16110]: Soliciting pool server 212.132.108.186
2026-03-10T05:43:22.415 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:22 ntpd[16110]: Soliciting pool server 168.119.211.223
2026-03-10T05:43:22.415 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:22 ntpd[16110]: Soliciting pool server 94.130.23.46
2026-03-10T05:43:22.415 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:22 ntpd[16110]: Soliciting pool server 185.125.190.56
2026-03-10T05:43:23.375 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:23 ntpd[16099]: Soliciting pool server 185.125.190.57
2026-03-10T05:43:23.375 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:23 ntpd[16099]: Soliciting pool server 213.172.105.106
2026-03-10T05:43:23.375 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:23 ntpd[16099]: Soliciting pool server 139.162.187.236
2026-03-10T05:43:23.415 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:23 ntpd[16110]: Soliciting pool server 185.125.190.57
2026-03-10T05:43:23.415 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:23 ntpd[16110]: Soliciting pool server 213.172.105.106
2026-03-10T05:43:23.415 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:23 ntpd[16110]: Soliciting pool server 139.162.187.236
2026-03-10T05:43:26.443 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 05:43:26 ntpd[16110]: ntpd: time slew -0.000539 s
2026-03-10T05:43:26.443 INFO:teuthology.orchestra.run.vm05.stdout:ntpd: time slew -0.000539s
2026-03-10T05:43:26.465 INFO:teuthology.orchestra.run.vm05.stdout:     remote           refid      st t when poll reach   delay   offset  jitter
2026-03-10T05:43:26.465 INFO:teuthology.orchestra.run.vm05.stdout:==============================================================================
2026-03-10T05:43:26.465 INFO:teuthology.orchestra.run.vm05.stdout: 0.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T05:43:26.465 INFO:teuthology.orchestra.run.vm05.stdout: 1.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T05:43:26.465 INFO:teuthology.orchestra.run.vm05.stdout: 2.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T05:43:26.465 INFO:teuthology.orchestra.run.vm05.stdout: 3.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T05:43:26.465 INFO:teuthology.orchestra.run.vm05.stdout: ntp.ubuntu.com  .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T05:43:28.396 INFO:teuthology.orchestra.run.vm02.stdout:10 Mar 05:43:28 ntpd[16099]: ntpd: time slew +0.003659 s
2026-03-10T05:43:28.397 INFO:teuthology.orchestra.run.vm02.stdout:ntpd: time slew +0.003659s
2026-03-10T05:43:28.416 INFO:teuthology.orchestra.run.vm02.stdout:     remote           refid      st t when poll reach   delay   offset  jitter
2026-03-10T05:43:28.417 INFO:teuthology.orchestra.run.vm02.stdout:==============================================================================
2026-03-10T05:43:28.417 INFO:teuthology.orchestra.run.vm02.stdout: 0.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T05:43:28.417 INFO:teuthology.orchestra.run.vm02.stdout: 1.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T05:43:28.417 INFO:teuthology.orchestra.run.vm02.stdout: 2.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T05:43:28.417 INFO:teuthology.orchestra.run.vm02.stdout: 3.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T05:43:28.417 INFO:teuthology.orchestra.run.vm02.stdout: ntp.ubuntu.com  .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T05:43:28.417 INFO:teuthology.run_tasks:Running task cephadm...
2026-03-10T05:43:28.461 INFO:tasks.cephadm:Config: {'cephadm_branch': 'v17.2.0', 'cephadm_git_url': 'https://github.com/ceph/ceph', 'image': 'quay.io/ceph/ceph:v17.2.0', 'conf': {'global': {'mon election default strategy': 3}, 'mgr': {'debug mgr': 20, 'debug ms': 1, 'mgr/cephadm/use_agent': False}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'CEPHADM_STRAY_DAEMON', 'CEPHADM_FAILED_DAEMON', 'CEPHADM_AGENT_DOWN'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}
2026-03-10T05:43:28.462 INFO:tasks.cephadm:Cluster image is quay.io/ceph/ceph:v17.2.0
2026-03-10T05:43:28.462 INFO:tasks.cephadm:Cluster fsid is 107483ae-1c44-11f1-b530-c1172cd6122a
2026-03-10T05:43:28.462 INFO:tasks.cephadm:Choosing monitor IPs and ports...
2026-03-10T05:43:28.462 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.102', 'mon.c': '[v2:192.168.123.102:3301,v1:192.168.123.102:6790]', 'mon.b': '192.168.123.105'}
2026-03-10T05:43:28.462 INFO:tasks.cephadm:First mon is mon.a on vm02
2026-03-10T05:43:28.462 INFO:tasks.cephadm:First mgr is y
2026-03-10T05:43:28.462 INFO:tasks.cephadm:Normalizing hostnames...
2026-03-10T05:43:28.462 DEBUG:teuthology.orchestra.run.vm02:> sudo hostname $(hostname -s)
2026-03-10T05:43:28.469 DEBUG:teuthology.orchestra.run.vm05:> sudo hostname $(hostname -s)
2026-03-10T05:43:28.476 INFO:tasks.cephadm:Downloading cephadm (repo https://github.com/ceph/ceph ref v17.2.0)...
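In the Monitor IPs map above, mon.c is pinned to an explicit address vector, [v2:192.168.123.102:3301,v1:192.168.123.102:6790], listing both a msgr2 and a legacy v1 endpoint, while mon.a and mon.b are bare IPs that get the default ports (3300/6789, as the bootstrap output below confirms for mon.a). A small parser sketch for the bracketed form (IPv4 host:port entries only, which is all this log uses):

    def parse_addrvec(addrv: str) -> dict[str, tuple[str, int]]:
        # "[v2:host:port,v1:host:port]" -> {"v2": (host, port), ...}
        entries = {}
        for part in addrv.strip("[]").split(","):
            proto, host, port = part.split(":")
            entries[proto] = (host, int(port))
        return entries

    print(parse_addrvec("[v2:192.168.123.102:3301,v1:192.168.123.102:6790]"))
    # {'v2': ('192.168.123.102', 3301), 'v1': ('192.168.123.102', 6790)}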
2026-03-10T05:43:28.476 DEBUG:teuthology.orchestra.run.vm02:> curl --silent https://raw.githubusercontent.com/ceph/ceph/v17.2.0/src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-10T05:43:28.725 INFO:teuthology.orchestra.run.vm02.stdout:-rw-rw-r-- 1 ubuntu ubuntu 320521 Mar 10 05:43 /home/ubuntu/cephtest/cephadm
2026-03-10T05:43:28.725 DEBUG:teuthology.orchestra.run.vm05:> curl --silent https://raw.githubusercontent.com/ceph/ceph/v17.2.0/src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-10T05:43:28.809 INFO:teuthology.orchestra.run.vm05.stdout:-rw-rw-r-- 1 ubuntu ubuntu 320521 Mar 10 05:43 /home/ubuntu/cephtest/cephadm
2026-03-10T05:43:28.809 DEBUG:teuthology.orchestra.run.vm02:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-10T05:43:28.812 DEBUG:teuthology.orchestra.run.vm05:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-10T05:43:28.819 INFO:tasks.cephadm:Pulling image quay.io/ceph/ceph:v17.2.0 on all hosts...
2026-03-10T05:43:28.819 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 pull
2026-03-10T05:43:28.857 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 pull
2026-03-10T05:43:28.938 INFO:teuthology.orchestra.run.vm02.stderr:Pulling container image quay.io/ceph/ceph:v17.2.0...
2026-03-10T05:43:28.938 INFO:teuthology.orchestra.run.vm05.stderr:Pulling container image quay.io/ceph/ceph:v17.2.0...
2026-03-10T05:43:48.858 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T05:43:48.858 INFO:teuthology.orchestra.run.vm05.stdout:    "ceph_version": "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)",
2026-03-10T05:43:48.858 INFO:teuthology.orchestra.run.vm05.stdout:    "image_id": "e1d6a67b021eb077ee22bf650f1a9fb1980a2cf5c36bdb9cba9eac6de8f702d9",
2026-03-10T05:43:48.858 INFO:teuthology.orchestra.run.vm05.stdout:    "repo_digests": [
2026-03-10T05:43:48.858 INFO:teuthology.orchestra.run.vm05.stdout:        "quay.io/ceph/ceph@sha256:12a0a4f43413fd97a14a3d47a3451b2d2df50020835bb93db666209f3f77617a"
2026-03-10T05:43:48.858 INFO:teuthology.orchestra.run.vm05.stdout:    ]
2026-03-10T05:43:48.858 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T05:43:48.873 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:43:48.873 INFO:teuthology.orchestra.run.vm02.stdout:    "ceph_version": "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)",
2026-03-10T05:43:48.873 INFO:teuthology.orchestra.run.vm02.stdout:    "image_id": "e1d6a67b021eb077ee22bf650f1a9fb1980a2cf5c36bdb9cba9eac6de8f702d9",
2026-03-10T05:43:48.873 INFO:teuthology.orchestra.run.vm02.stdout:    "repo_digests": [
2026-03-10T05:43:48.873 INFO:teuthology.orchestra.run.vm02.stdout:        "quay.io/ceph/ceph@sha256:12a0a4f43413fd97a14a3d47a3451b2d2df50020835bb93db666209f3f77617a"
2026-03-10T05:43:48.873 INFO:teuthology.orchestra.run.vm02.stdout:    ]
2026-03-10T05:43:48.873 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:43:48.884 DEBUG:teuthology.orchestra.run.vm02:> sudo mkdir -p /etc/ceph
2026-03-10T05:43:48.891 DEBUG:teuthology.orchestra.run.vm05:> sudo mkdir -p /etc/ceph
2026-03-10T05:43:48.898 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod 777 /etc/ceph
2026-03-10T05:43:48.940 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod 777 /etc/ceph
2026-03-10T05:43:48.948 INFO:tasks.cephadm:Writing seed config...
2026-03-10T05:43:48.949 INFO:tasks.cephadm: override: [global] mon election default strategy = 3
2026-03-10T05:43:48.949 INFO:tasks.cephadm: override: [mgr] debug mgr = 20
2026-03-10T05:43:48.949 INFO:tasks.cephadm: override: [mgr] debug ms = 1
2026-03-10T05:43:48.949 INFO:tasks.cephadm: override: [mgr] mgr/cephadm/use_agent = False
2026-03-10T05:43:48.949 INFO:tasks.cephadm: override: [mon] debug mon = 20
2026-03-10T05:43:48.949 INFO:tasks.cephadm: override: [mon] debug ms = 1
2026-03-10T05:43:48.949 INFO:tasks.cephadm: override: [mon] debug paxos = 20
2026-03-10T05:43:48.949 INFO:tasks.cephadm: override: [osd] debug ms = 1
2026-03-10T05:43:48.949 INFO:tasks.cephadm: override: [osd] debug osd = 20
2026-03-10T05:43:48.949 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000
2026-03-10T05:43:48.949 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-10T05:43:48.949 DEBUG:teuthology.orchestra.run.vm02:> dd of=/home/ubuntu/cephtest/seed.ceph.conf
2026-03-10T05:43:48.984 DEBUG:tasks.cephadm:Final config:
[global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000

# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd

# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true

# adjust warnings
mon max pg per osd = 10000        # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false

# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off

# tests delete pools
mon allow pool delete = true

fsid = 107483ae-1c44-11f1-b530-c1172cd6122a
mon election default strategy = 3

[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true

# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = true
bdev debug aio = true
osd sloppy crc = true
debug ms = 1
debug osd = 20
osd mclock iops capacity threshold hdd = 49000

[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1
mgr/cephadm/use_agent = False

[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10

# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660         # 11m
auth service ticket ttl = 240     # 4m

# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false

debug mon = 20
debug ms = 1
debug paxos = 20

[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
2026-03-10T05:43:48.984 DEBUG:teuthology.orchestra.run.vm02:mon.a> sudo journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mon.a.service
2026-03-10T05:43:49.026 DEBUG:teuthology.orchestra.run.vm02:mgr.y> sudo journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mgr.y.service
2026-03-10T05:43:49.070 INFO:tasks.cephadm:Bootstrapping...
2026-03-10T05:43:49.070 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 -v bootstrap --fsid 107483ae-1c44-11f1-b530-c1172cd6122a --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.102 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring
2026-03-10T05:43:49.186 INFO:teuthology.orchestra.run.vm02.stderr:--------------------------------------------------------------------------------
2026-03-10T05:43:49.186 INFO:teuthology.orchestra.run.vm02.stderr:cephadm ['--image', 'quay.io/ceph/ceph:v17.2.0', '-v', 'bootstrap', '--fsid', '107483ae-1c44-11f1-b530-c1172cd6122a', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'y', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.102', '--skip-admin-label']
2026-03-10T05:43:49.186 INFO:teuthology.orchestra.run.vm02.stderr:Verifying podman|docker is present...
2026-03-10T05:43:49.186 INFO:teuthology.orchestra.run.vm02.stderr:Verifying lvm2 is present...
2026-03-10T05:43:49.186 INFO:teuthology.orchestra.run.vm02.stderr:Verifying time synchronization is in place...
2026-03-10T05:43:49.188 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: Failed to get unit file state for chrony.service: No such file or directory
2026-03-10T05:43:49.190 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: inactive
2026-03-10T05:43:49.192 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: Failed to get unit file state for chronyd.service: No such file or directory
2026-03-10T05:43:49.194 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: inactive
2026-03-10T05:43:49.196 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: masked
2026-03-10T05:43:49.198 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: inactive
2026-03-10T05:43:49.200 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: Failed to get unit file state for ntpd.service: No such file or directory
2026-03-10T05:43:49.202 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: inactive
2026-03-10T05:43:49.205 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: enabled
2026-03-10T05:43:49.207 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: active
2026-03-10T05:43:49.207 INFO:teuthology.orchestra.run.vm02.stderr:Unit ntp.service is enabled and running
2026-03-10T05:43:49.207 INFO:teuthology.orchestra.run.vm02.stderr:Repeating the final host check...
2026-03-10T05:43:49.207 INFO:teuthology.orchestra.run.vm02.stderr:docker (/usr/bin/docker) is present
2026-03-10T05:43:49.207 INFO:teuthology.orchestra.run.vm02.stderr:systemctl is present
2026-03-10T05:43:49.207 INFO:teuthology.orchestra.run.vm02.stderr:lvcreate is present
2026-03-10T05:43:49.209 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: Failed to get unit file state for chrony.service: No such file or directory
2026-03-10T05:43:49.211 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: inactive
2026-03-10T05:43:49.212 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: Failed to get unit file state for chronyd.service: No such file or directory
2026-03-10T05:43:49.214 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: inactive
2026-03-10T05:43:49.216 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: masked
2026-03-10T05:43:49.218 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: inactive
2026-03-10T05:43:49.220 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: Failed to get unit file state for ntpd.service: No such file or directory
2026-03-10T05:43:49.222 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: inactive
2026-03-10T05:43:49.225 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: enabled
2026-03-10T05:43:49.227 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: active
2026-03-10T05:43:49.227 INFO:teuthology.orchestra.run.vm02.stderr:Unit ntp.service is enabled and running
2026-03-10T05:43:49.227 INFO:teuthology.orchestra.run.vm02.stderr:Host looks OK
2026-03-10T05:43:49.227 INFO:teuthology.orchestra.run.vm02.stderr:Cluster fsid: 107483ae-1c44-11f1-b530-c1172cd6122a
2026-03-10T05:43:49.227 INFO:teuthology.orchestra.run.vm02.stderr:Acquiring lock 139732602315744 on /run/cephadm/107483ae-1c44-11f1-b530-c1172cd6122a.lock
2026-03-10T05:43:49.227 INFO:teuthology.orchestra.run.vm02.stderr:Lock 139732602315744 acquired on /run/cephadm/107483ae-1c44-11f1-b530-c1172cd6122a.lock
2026-03-10T05:43:49.227 INFO:teuthology.orchestra.run.vm02.stderr:Verifying IP 192.168.123.102 port 3300 ...
2026-03-10T05:43:49.227 INFO:teuthology.orchestra.run.vm02.stderr:Verifying IP 192.168.123.102 port 6789 ...
2026-03-10T05:43:49.227 INFO:teuthology.orchestra.run.vm02.stderr:Base mon IP is 192.168.123.102, final addrv is [v2:192.168.123.102:3300,v1:192.168.123.102:6789]
2026-03-10T05:43:49.228 INFO:teuthology.orchestra.run.vm02.stderr:/usr/sbin/ip: default via 192.168.123.1 dev ens3 proto dhcp src 192.168.123.102 metric 100
2026-03-10T05:43:49.228 INFO:teuthology.orchestra.run.vm02.stderr:/usr/sbin/ip: 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
2026-03-10T05:43:49.228 INFO:teuthology.orchestra.run.vm02.stderr:/usr/sbin/ip: 192.168.123.0/24 dev ens3 proto kernel scope link src 192.168.123.102 metric 100
2026-03-10T05:43:49.228 INFO:teuthology.orchestra.run.vm02.stderr:/usr/sbin/ip: 192.168.123.1 dev ens3 proto dhcp scope link src 192.168.123.102 metric 100
2026-03-10T05:43:49.229 INFO:teuthology.orchestra.run.vm02.stderr:/usr/sbin/ip: ::1 dev lo proto kernel metric 256 pref medium
2026-03-10T05:43:49.229 INFO:teuthology.orchestra.run.vm02.stderr:/usr/sbin/ip: fe80::/64 dev ens3 proto kernel metric 256 pref medium
2026-03-10T05:43:49.230 INFO:teuthology.orchestra.run.vm02.stderr:/usr/sbin/ip: 1: lo: mtu 65536 state UNKNOWN qlen 1000
2026-03-10T05:43:49.230 INFO:teuthology.orchestra.run.vm02.stderr:/usr/sbin/ip: inet6 ::1/128 scope host
2026-03-10T05:43:49.230 INFO:teuthology.orchestra.run.vm02.stderr:/usr/sbin/ip: valid_lft forever preferred_lft forever
2026-03-10T05:43:49.230 INFO:teuthology.orchestra.run.vm02.stderr:/usr/sbin/ip: 2: ens3: mtu 1500 state UP qlen 1000
2026-03-10T05:43:49.230 INFO:teuthology.orchestra.run.vm02.stderr:/usr/sbin/ip: inet6 fe80::5055:ff:fe00:2/64 scope link
2026-03-10T05:43:49.230 INFO:teuthology.orchestra.run.vm02.stderr:/usr/sbin/ip: valid_lft forever preferred_lft forever
2026-03-10T05:43:49.231 INFO:teuthology.orchestra.run.vm02.stderr:Mon IP `192.168.123.102` is in CIDR network `192.168.123.0/24`
2026-03-10T05:43:49.231 INFO:teuthology.orchestra.run.vm02.stderr:- internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
2026-03-10T05:43:49.231 INFO:teuthology.orchestra.run.vm02.stderr:Pulling container image quay.io/ceph/ceph:v17.2.0...
2026-03-10T05:43:50.294 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/docker: v17.2.0: Pulling from ceph/ceph
2026-03-10T05:43:50.297 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/docker: Digest: sha256:12a0a4f43413fd97a14a3d47a3451b2d2df50020835bb93db666209f3f77617a
2026-03-10T05:43:50.297 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/docker: Status: Image is up to date for quay.io/ceph/ceph:v17.2.0
2026-03-10T05:43:50.297 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/docker: quay.io/ceph/ceph:v17.2.0
2026-03-10T05:43:50.421 INFO:teuthology.orchestra.run.vm02.stderr:ceph: ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)
2026-03-10T05:43:50.450 INFO:teuthology.orchestra.run.vm02.stderr:Ceph version: ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)
2026-03-10T05:43:50.450 INFO:teuthology.orchestra.run.vm02.stderr:Extracting ceph user uid/gid from container image...
2026-03-10T05:43:50.507 INFO:teuthology.orchestra.run.vm02.stderr:stat: 167 167
2026-03-10T05:43:50.528 INFO:teuthology.orchestra.run.vm02.stderr:Creating initial keys...
2026-03-10T05:43:50.593 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-authtool: AQCWr69pVkJiIxAA3oTalaeUSLdizcRCCdRHyQ== 2026-03-10T05:43:50.685 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-authtool: AQCWr69pIJ3gKBAAgUCSUYX3L11ni6dKSLwmSw== 2026-03-10T05:43:50.777 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-authtool: AQCWr69pSehfLhAAiZVLKC5x65yl+oUXvhpkWQ== 2026-03-10T05:43:50.799 INFO:teuthology.orchestra.run.vm02.stderr:Creating initial monmap... 2026-03-10T05:43:50.866 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/monmaptool: /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-10T05:43:50.866 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/monmaptool: setting min_mon_release = octopus 2026-03-10T05:43:50.866 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/monmaptool: /usr/bin/monmaptool: set fsid to 107483ae-1c44-11f1-b530-c1172cd6122a 2026-03-10T05:43:50.866 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/monmaptool: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-10T05:43:50.892 INFO:teuthology.orchestra.run.vm02.stderr:monmaptool for a [v2:192.168.123.102:3300,v1:192.168.123.102:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-10T05:43:50.892 INFO:teuthology.orchestra.run.vm02.stderr:setting min_mon_release = octopus 2026-03-10T05:43:50.892 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/monmaptool: set fsid to 107483ae-1c44-11f1-b530-c1172cd6122a 2026-03-10T05:43:50.892 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-10T05:43:50.892 INFO:teuthology.orchestra.run.vm02.stderr: 2026-03-10T05:43:50.892 INFO:teuthology.orchestra.run.vm02.stderr:Creating mon... 2026-03-10T05:43:50.969 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.963+0000 7fe69b16e880 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-10T05:43:50.969 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.963+0000 7fe69b16e880 1 imported monmap: 2026-03-10T05:43:50.969 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: epoch 0 2026-03-10T05:43:50.969 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: fsid 107483ae-1c44-11f1-b530-c1172cd6122a 2026-03-10T05:43:50.969 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: last_changed 2026-03-10T05:43:50.866640+0000 2026-03-10T05:43:50.969 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: created 2026-03-10T05:43:50.866640+0000 2026-03-10T05:43:50.969 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: min_mon_release 15 (octopus) 2026-03-10T05:43:50.969 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: election_strategy: 1 2026-03-10T05:43:50.969 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a 2026-03-10T05:43:50.969 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: 2026-03-10T05:43:50.969 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.963+0000 7fe69b16e880 0 /usr/bin/ceph-mon: set fsid to 107483ae-1c44-11f1-b530-c1172cd6122a 2026-03-10T05:43:50.970 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: RocksDB version: 6.15.5 2026-03-10T05:43:50.970 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: 2026-03-10T05:43:50.970 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 
2026-03-10T05:43:50.970 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: RocksDB version: 6.15.5
2026-03-10T05:43:50.970 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon:
2026-03-10T05:43:50.970 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Git sha rocksdb_build_git_sha:@0@
2026-03-10T05:43:50.970 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Compile date Apr 18 2022
2026-03-10T05:43:50.971 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: DB SUMMARY
2026-03-10T05:43:50.971 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon:
2026-03-10T05:43:50.971 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: DB Session ID: LN1DZ35RB80I90K6ZZHR
2026-03-10T05:43:50.971 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon:
2026-03-10T05:43:50.971 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 0, files:
2026-03-10T05:43:50.971 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon:
2026-03-10T05:43:50.971 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db:
2026-03-10T05:43:50.971 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon:
2026-03-10T05:43:50.971 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.error_if_exists: 0
2026-03-10T05:43:50.971 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.create_if_missing: 1
2026-03-10T05:43:50.971 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.paranoid_checks: 1
2026-03-10T05:43:50.971 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-10T05:43:50.971 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.env: 0x55f39ccc6860
2026-03-10T05:43:50.971 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.fs: Posix File System
2026-03-10T05:43:50.971 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.info_log: 0x55f3aae01320
2026-03-10T05:43:50.971 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.max_file_opening_threads: 16
2026-03-10T05:43:50.971 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.statistics: (nil)
2026-03-10T05:43:50.971 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.use_fsync: 0
2026-03-10T05:43:50.971 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.max_log_file_size: 0
2026-03-10T05:43:50.971 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-10T05:43:50.971 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.log_file_time_to_roll: 0
2026-03-10T05:43:50.971 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.keep_log_file_num: 1000
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.recycle_log_file_num: 0
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.allow_fallocate: 1
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.allow_mmap_reads: 0
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.allow_mmap_writes: 0
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.use_direct_reads: 0
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.create_missing_column_families: 0
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.db_log_dir:
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.wal_dir: /var/lib/ceph/mon/ceph-a/store.db
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.table_cache_numshardbits: 6
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.WAL_ttl_seconds: 0
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.WAL_size_limit_MB: 0
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.is_fd_close_on_exec: 1
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.advise_random_on_open: 1
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.db_write_buffer_size: 0
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.write_buffer_manager: 0x55f3ab0a1950
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.new_table_reader_for_compaction_inputs: 0
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.use_adaptive_mutex: 0
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.rate_limiter: (nil)
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.wal_recovery_mode: 2
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.enable_thread_tracking: 0
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.enable_pipelined_write: 0
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.unordered_write: 0
2026-03-10T05:43:50.972 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.row_cache: None
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.wal_filter: None
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.allow_ingest_behind: 0
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.preserve_deletes: 0
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.two_write_queues: 0
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.manual_wal_flush: 0
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.atomic_flush: 0
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.persist_stats_to_disk: 0
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.write_dbid_to_manifest: 0
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.log_readahead_size: 0
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.best_efforts_recovery: 0
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.allow_data_in_errors: 0
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.db_host_id: __hostname__
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.max_background_jobs: 2
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.max_background_compactions: -1
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.max_subcompactions: 1
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.delayed_write_rate : 16777216
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.max_total_wal_size: 0
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.stats_dump_period_sec: 600
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.stats_persist_period_sec: 600
2026-03-10T05:43:50.973 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-10T05:43:50.974 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.max_open_files: -1
2026-03-10T05:43:50.974 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.bytes_per_sync: 0
2026-03-10T05:43:50.974 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.wal_bytes_per_sync: 0
2026-03-10T05:43:50.974 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.strict_bytes_per_sync: 0
2026-03-10T05:43:50.974 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.compaction_readahead_size: 0
2026-03-10T05:43:50.974 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Options.max_background_flushes: -1
2026-03-10T05:43:50.974 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Compression algorithms supported:
2026-03-10T05:43:50.974 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-10T05:43:50.974 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: kZSTD supported: 0
2026-03-10T05:43:50.974 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: kXpressCompression supported: 0
2026-03-10T05:43:50.974 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: kLZ4HCCompression supported: 1
2026-03-10T05:43:50.974 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: kLZ4Compression supported: 1
2026-03-10T05:43:50.974 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: kBZip2Compression supported: 0
2026-03-10T05:43:50.974 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: kZlibCompression supported: 1
2026-03-10T05:43:50.974 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: kSnappyCompression supported: 1
2026-03-10T05:43:50.974 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: Fast CRC32 supported: Supported on x86
2026-03-10T05:43:50.974 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.967+0000 7fe69b16e880 4 rocksdb: [db/db_impl/db_impl_open.cc:281] Creating manifest 1
2026-03-10T05:43:50.974 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon:
2026-03-10T05:43:50.976 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: [db/version_set.cc:4725] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001
2026-03-10T05:43:50.976 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon:
2026-03-10T05:43:50.976 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: [db/column_family.cc:597] --------------- Options for column family [default]:
2026-03-10T05:43:50.976 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon:
2026-03-10T05:43:50.976 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-10T05:43:50.976 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.merge_operator:
2026-03-10T05:43:50.976 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.compaction_filter: None
2026-03-10T05:43:50.976 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.compaction_filter_factory: None
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.sst_partitioner_factory: None
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.memtable_factory: SkipListFactory
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.table_factory: BlockBasedTable
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3aadcad10)
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: cache_index_and_filter_blocks: 1
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: cache_index_and_filter_blocks_with_high_priority: 0
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: pin_top_level_index_and_filter: 1
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: index_type: 0
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: data_block_index_type: 0
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: index_shortening: 1
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: data_block_hash_table_util_ratio: 0.750000
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: hash_index_allow_collision: 1
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: checksum: 1
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: no_block_cache: 0
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: block_cache: 0x55f3aae32170
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: block_cache_name: BinnedLRUCache
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: block_cache_options:
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: capacity : 536870912
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: num_shard_bits : 4
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: strict_capacity_limit : 0
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: high_pri_pool_ratio: 0.000
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: block_cache_compressed: (nil)
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: persistent_cache: (nil)
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: block_size: 4096
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: block_size_deviation: 10
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: block_restart_interval: 16
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: index_block_restart_interval: 1
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: metadata_block_size: 4096
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: partition_filters: 0
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: use_delta_encoding: 1
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: filter_policy: rocksdb.BuiltinBloomFilter
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: whole_key_filtering: 1
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: verify_compression: 0
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: read_amp_bytes_per_bit: 0
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: format_version: 4
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: enable_index_compression: 1
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: block_align: 0
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon:
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.write_buffer_size: 33554432
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.max_write_buffer_number: 2
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.compression: NoCompression
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.bottommost_compression: Disabled
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.prefix_extractor: nullptr
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.num_levels: 7
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.compression_opts.window_bits: -14
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.compression_opts.level: 32767
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.compression_opts.strategy: 0
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.compression_opts.enabled: false
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.target_file_size_base: 67108864
2026-03-10T05:43:50.977 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.target_file_size_multiplier: 1
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.max_compaction_bytes: 1677721600
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.arena_block_size: 4194304
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.rate_limit_delay_max_milliseconds: 100
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.disable_auto_compactions: 0
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.table_properties_collectors:
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.inplace_update_support: 0
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.inplace_update_num_locks: 10000
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.memtable_whole_key_filtering: 0
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.memtable_huge_page_size: 0
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.bloom_locality: 0
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.max_successive_merges: 0
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.optimize_filters_for_hits: 0
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.paranoid_file_checks: 0
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.force_consistency_checks: 1
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.report_bg_io_stats: 0
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.ttl: 2592000
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.periodic_compaction_seconds: 0
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.enable_blob_files: false
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.min_blob_size: 0
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.blob_file_size: 268435456
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.blob_compression_type: NoCompression
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.enable_blob_garbage_collection: false
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: [db/version_set.cc:4773] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon:
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: [db/version_set.cc:4782] Column family [default] (ID 0), log number is 0
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon:
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.971+0000 7fe69b16e880 4 rocksdb: [db/version_set.cc:4083] Creating manifest 3
2026-03-10T05:43:50.978 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon:
2026-03-10T05:43:50.979 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.975+0000 7fe69b16e880 4 rocksdb: [db/db_impl/db_impl_open.cc:1701] SstFileManager instance 0x55f3aae18700
2026-03-10T05:43:50.979 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.975+0000 7fe69b16e880 4 rocksdb: DB pointer 0x55f3aae8c000
2026-03-10T05:43:50.979 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.975+0000 7fe68cd58700 4 rocksdb: [db/db_impl/db_impl.cc:902] ------- DUMPING STATS -------
2026-03-10T05:43:50.980 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.975+0000 7fe68cd58700 4 rocksdb: [db/db_impl/db_impl.cc:903]
2026-03-10T05:43:50.980 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: ** DB Stats **
2026-03-10T05:43:50.980 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T05:43:50.980 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
2026-03-10T05:43:50.980 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-10T05:43:50.980 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T05:43:50.980 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
2026-03-10T05:43:50.980 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
2026-03-10T05:43:50.980 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T05:43:50.980 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon:
2026-03-10T05:43:50.980 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: ** Compaction Stats [default] **
2026-03-10T05:43:50.980 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
2026-03-10T05:43:50.980 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T05:43:50.980 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0
2026-03-10T05:43:50.980 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0
2026-03-10T05:43:50.980 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon:
2026-03-10T05:43:50.980 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: ** Compaction Stats [default] **
2026-03-10T05:43:50.980 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
2026-03-10T05:43:50.980 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T05:43:50.980 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T05:43:50.980 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: Flush(GB): cumulative 0.000, interval 0.000
2026-03-10T05:43:50.980 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: AddFile(GB): cumulative 0.000, interval 0.000
2026-03-10T05:43:50.980 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: AddFile(Total Files): cumulative 0, interval 0
2026-03-10T05:43:50.980 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: AddFile(L0 Files): cumulative 0, interval 0
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: AddFile(Keys): cumulative 0, interval 0
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon:
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: ** File Read Latency Histogram By Level [default] **
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon:
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: ** Compaction Stats [default] **
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon:
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: ** Compaction Stats [default] **
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: Flush(GB): cumulative 0.000, interval 0.000
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: AddFile(GB): cumulative 0.000, interval 0.000
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: AddFile(Total Files): cumulative 0, interval 0
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: AddFile(L0 Files): cumulative 0, interval 0
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: AddFile(Keys): cumulative 0, interval 0
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon:
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: ** File Read Latency Histogram By Level [default] **
2026-03-10T05:43:50.981 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon:
2026-03-10T05:43:50.982 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.979+0000 7fe69b16e880 4 rocksdb: [db/db_impl/db_impl.cc:447] Shutdown: canceling all background work
2026-03-10T05:43:50.982 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.979+0000 7fe69b16e880 4 rocksdb: [db/db_impl/db_impl.cc:625] Shutdown complete
2026-03-10T05:43:50.982 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph-mon: debug 2026-03-10T05:43:50.979+0000 7fe69b16e880 0 /usr/bin/ceph-mon: created monfs at /var/lib/ceph/mon/ceph-a for mon.a
2026-03-10T05:43:51.031 INFO:teuthology.orchestra.run.vm02.stderr:create mon.a on
2026-03-10T05:43:51.170 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target.
2026-03-10T05:43:51.320 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-107483ae-1c44-11f1-b530-c1172cd6122a.target → /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a.target.
2026-03-10T05:43:51.320 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: Created symlink /etc/systemd/system/ceph.target.wants/ceph-107483ae-1c44-11f1-b530-c1172cd6122a.target → /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a.target.
2026-03-10T05:43:51.656 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: Failed to reset failed state of unit ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mon.a.service: Unit ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mon.a.service not loaded.
2026-03-10T05:43:51.659 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: Created symlink /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a.target.wants/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mon.a.service → /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service.
2026-03-10T05:43:51.814 INFO:teuthology.orchestra.run.vm02.stderr:firewalld does not appear to be present
2026-03-10T05:43:51.814 INFO:teuthology.orchestra.run.vm02.stderr:Not possible to enable service . firewalld.service is not available
2026-03-10T05:43:51.814 INFO:teuthology.orchestra.run.vm02.stderr:Waiting for mon to start...
2026-03-10T05:43:51.814 INFO:teuthology.orchestra.run.vm02.stderr:Waiting for mon...
2026-03-10T05:43:52.017 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: cluster:
2026-03-10T05:43:52.017 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: id: 107483ae-1c44-11f1-b530-c1172cd6122a
2026-03-10T05:43:52.017 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: health: HEALTH_OK
2026-03-10T05:43:52.017 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph:
2026-03-10T05:43:52.017 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: services:
2026-03-10T05:43:52.017 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: mon: 1 daemons, quorum a (age 0.0684218s)
2026-03-10T05:43:52.017 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: mgr: no daemons active
2026-03-10T05:43:52.017 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: osd: 0 osds: 0 up, 0 in
2026-03-10T05:43:52.017 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph:
2026-03-10T05:43:52.017 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: data:
2026-03-10T05:43:52.017 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: pools: 0 pools, 0 pgs
2026-03-10T05:43:52.017 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: objects: 0 objects, 0 B
2026-03-10T05:43:52.017 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: usage: 0 B used, 0 B / 0 B avail
2026-03-10T05:43:52.017 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: pgs:
2026-03-10T05:43:52.017 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph:
2026-03-10T05:43:52.025 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:51 vm02 bash[17020]: cluster 2026-03-10T05:43:51.945297+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-10T05:43:52.049 INFO:teuthology.orchestra.run.vm02.stderr:mon is available
2026-03-10T05:43:52.049 INFO:teuthology.orchestra.run.vm02.stderr:Assimilating anything we can from ceph.conf...
2026-03-10T05:43:52.200 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph:
2026-03-10T05:43:52.201 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: [global]
2026-03-10T05:43:52.201 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: fsid = 107483ae-1c44-11f1-b530-c1172cd6122a
2026-03-10T05:43:52.201 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: mon_host = [v2:192.168.123.102:3300,v1:192.168.123.102:6789]
2026-03-10T05:43:52.201 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: mon_osd_allow_pg_remap = true
2026-03-10T05:43:52.201 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: mon_osd_allow_primary_affinity = true
2026-03-10T05:43:52.201 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: mon_warn_on_no_sortbitwise = false
2026-03-10T05:43:52.201 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: osd_crush_chooseleaf_type = 0
2026-03-10T05:43:52.201 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph:
2026-03-10T05:43:52.201 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: [mgr]
2026-03-10T05:43:52.201 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: mgr/cephadm/use_agent = False
2026-03-10T05:43:52.201 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: mgr/telemetry/nag = false
2026-03-10T05:43:52.201 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph:
2026-03-10T05:43:52.201 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: [osd]
2026-03-10T05:43:52.201 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: osd_map_max_advance = 10
2026-03-10T05:43:52.201 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: osd_mclock_iops_capacity_threshold_hdd = 49000
2026-03-10T05:43:52.201 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: osd_sloppy_crc = true
2026-03-10T05:43:52.232 INFO:teuthology.orchestra.run.vm02.stderr:Generating new minimal ceph.conf...
2026-03-10T05:43:52.419 INFO:teuthology.orchestra.run.vm02.stderr:Restarting the monitor...
2026-03-10T05:43:52.532 INFO:teuthology.orchestra.run.vm02.stderr:Setting mon public_network to 192.168.123.0/24
2026-03-10T05:43:52.643 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 systemd[1]: Stopping Ceph mon.a for 107483ae-1c44-11f1-b530-c1172cd6122a...
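The assimilate step feeds the bootstrap ceph.conf into the cluster's central config database (the [global]/[mgr]/[osd] dump above is what was absorbed), after which cephadm writes a minimal conf and pins the mon's public network before restarting it. The three operations map onto standard ceph subcommands; a sketch with this run's values (the leftover output path is illustrative):

    # fold an existing conf into the mon config store; options it cannot absorb come back in the -o file
    ceph config assimilate-conf -i /etc/ceph/ceph.conf -o /etc/ceph/ceph.conf.leftover
    # emit the minimal conf (essentially fsid + mon_host) that ends up in /etc/ceph/ceph.conf
    ceph config generate-minimal-conf
    # the setting applied just before the monitor restart above
    ceph config set mon public_network 192.168.123.0/24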
2026-03-10T05:43:52.643 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17388]: Error response from daemon: No such container: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-mon.a
2026-03-10T05:43:52.643 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17020]: debug 2026-03-10T05:43:52.435+0000 7f9280661700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-10T05:43:52.643 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17020]: debug 2026-03-10T05:43:52.435+0000 7f9280661700 -1 mon.a@0(leader) e1 *** Got Signal Terminated ***
2026-03-10T05:43:52.643 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17395]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-mon-a
2026-03-10T05:43:52.643 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17428]: Error response from daemon: No such container: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-mon.a
2026-03-10T05:43:52.643 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mon.a.service: Deactivated successfully.
2026-03-10T05:43:52.643 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 systemd[1]: Stopped Ceph mon.a for 107483ae-1c44-11f1-b530-c1172cd6122a.
2026-03-10T05:43:52.643 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 systemd[1]: Started Ceph mon.a for 107483ae-1c44-11f1-b530-c1172cd6122a.
2026-03-10T05:43:52.795 INFO:teuthology.orchestra.run.vm02.stderr:Wrote config to /etc/ceph/ceph.conf
2026-03-10T05:43:52.795 INFO:teuthology.orchestra.run.vm02.stderr:Wrote keyring to /etc/ceph/ceph.client.admin.keyring
2026-03-10T05:43:52.795 INFO:teuthology.orchestra.run.vm02.stderr:Creating mgr...
2026-03-10T05:43:52.795 INFO:teuthology.orchestra.run.vm02.stderr:Verifying port 9283 ...
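The two "No such container" errors during the restart appear to be harmless: the unit's stop path tries to remove a container under the dotted name ceph-<fsid>-mon.a as well as the dashed name ceph-<fsid>-mon-a, and only the dashed one (echoed by bash[17395]) actually exists on this host. "Verifying port 9283 ..." is cephadm checking that the mgr's Prometheus-module port is free before the mgr is created; an equivalent by-hand check, as a sketch:

    # any listener already bound to 9283? no output means the port is free
    ss -tlnp 'sport = :9283'
    # or attempt the bind directly and release it immediately
    python3 -c 'import socket; s = socket.socket(); s.bind(("", 9283)); s.close(); print("free")'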
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.639+0000 7f01d0d87880 0 set uid:gid to 167:167 (ceph:ceph)
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.639+0000 7f01d0d87880 0 ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable), process ceph-mon, pid 7
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.639+0000 7f01d0d87880 0 pidfile_write: ignore empty --pid-file
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.643+0000 7f01d0d87880 0 load: jerasure load: lrc
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: RocksDB version: 6.15.5
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Git sha rocksdb_build_git_sha:@0@
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Compile date Apr 18 2022
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: DB SUMMARY
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: DB Session ID: LTGOBLXBDWQTT7GDMIRH
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: CURRENT file: CURRENT
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: IDENTITY file: IDENTITY
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: MANIFEST file: MANIFEST-000009 size: 131 Bytes
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000010.log size: 73715 ;
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.error_if_exists: 0
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.create_if_missing: 0
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.paranoid_checks: 1
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.env: 0x5649356c6860
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.fs: Posix File System
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.info_log: 0x56495c24fe00
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.max_file_opening_threads: 16
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.statistics: (nil)
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.use_fsync: 0
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.max_log_file_size: 0
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.log_file_time_to_roll: 0
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.keep_log_file_num: 1000
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.recycle_log_file_num: 0
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.allow_fallocate: 1
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.allow_mmap_reads: 0
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.allow_mmap_writes: 0
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.use_direct_reads: 0
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.create_missing_column_families: 0
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.db_log_dir:
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.wal_dir: /var/lib/ceph/mon/ceph-a/store.db
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.table_cache_numshardbits: 6
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.WAL_ttl_seconds: 0
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.WAL_size_limit_MB: 0
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.is_fd_close_on_exec: 1
2026-03-10T05:43:52.907 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.advise_random_on_open: 1
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.db_write_buffer_size: 0
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.write_buffer_manager: 0x56495c340270
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.new_table_reader_for_compaction_inputs: 0
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.use_adaptive_mutex: 0
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.rate_limiter: (nil)
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.wal_recovery_mode: 2
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.enable_thread_tracking: 0
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.enable_pipelined_write: 0
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.unordered_write: 0
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.row_cache: None
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.wal_filter: None
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.allow_ingest_behind: 0
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.preserve_deletes: 0
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.two_write_queues: 0
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.manual_wal_flush: 0
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.atomic_flush: 0
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.persist_stats_to_disk: 0
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.write_dbid_to_manifest: 0
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.log_readahead_size: 0
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.best_efforts_recovery: 0
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.allow_data_in_errors: 0
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.db_host_id: __hostname__
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.max_background_jobs: 2
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.max_background_compactions: -1
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.max_subcompactions: 1
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.delayed_write_rate : 16777216
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.max_total_wal_size: 0
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.stats_dump_period_sec: 600
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.stats_persist_period_sec: 600
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.max_open_files: -1
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.bytes_per_sync: 0
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.wal_bytes_per_sync: 0
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.strict_bytes_per_sync: 0
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.compaction_readahead_size: 0
2026-03-10T05:43:52.908 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.max_background_flushes: -1
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Compression algorithms supported:
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: kZSTD supported: 0
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: kXpressCompression supported: 0
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: kLZ4HCCompression supported: 1
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: kLZ4Compression supported: 1
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: kBZip2Compression supported: 0
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: kZlibCompression supported: 1
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: kSnappyCompression supported: 1
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Fast CRC32 supported: Supported on x86
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: [db/version_set.cc:4725] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000009
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: [db/column_family.cc:597] --------------- Options for column family [default]:
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.merge_operator:
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.compaction_filter: None
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.compaction_filter_factory: None
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.sst_partitioner_factory: None
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.memtable_factory: SkipListFactory
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.table_factory: BlockBasedTable
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56495c21dd00)
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: cache_index_and_filter_blocks: 1
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: cache_index_and_filter_blocks_with_high_priority: 0
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: pin_top_level_index_and_filter: 1
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: index_type: 0
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: data_block_index_type: 0
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: index_shortening: 1
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: data_block_hash_table_util_ratio: 0.750000
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: hash_index_allow_collision: 1
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: checksum: 1
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: no_block_cache: 0
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: block_cache: 0x56495c284170
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: block_cache_name: BinnedLRUCache
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: block_cache_options:
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: capacity : 536870912
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: num_shard_bits : 4
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: strict_capacity_limit : 0
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: high_pri_pool_ratio: 0.000
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: block_cache_compressed: (nil)
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: persistent_cache: (nil)
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: block_size: 4096
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: block_size_deviation: 10
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: block_restart_interval: 16
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: index_block_restart_interval: 1
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: metadata_block_size: 4096
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: partition_filters: 0
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: use_delta_encoding: 1
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: filter_policy: rocksdb.BuiltinBloomFilter
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: whole_key_filtering: 1
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: verify_compression: 0
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: read_amp_bytes_per_bit: 0
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: format_version: 4
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: enable_index_compression: 1
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: block_align: 0
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.write_buffer_size: 33554432
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.max_write_buffer_number: 2
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.compression: NoCompression
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.bottommost_compression: Disabled
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.prefix_extractor: nullptr
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.num_levels: 7
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-10T05:43:52.909 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.compression_opts.window_bits: -14
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.compression_opts.level: 32767
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.compression_opts.strategy: 0
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.compression_opts.enabled: false
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.target_file_size_base: 67108864
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.target_file_size_multiplier: 1
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.max_compaction_bytes: 1677721600
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.arena_block_size: 4194304
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.rate_limit_delay_max_milliseconds: 100
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.disable_auto_compactions: 0
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.table_properties_collectors:
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.inplace_update_support: 0
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.inplace_update_num_locks: 10000
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.memtable_whole_key_filtering: 0
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.memtable_huge_page_size: 0
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.bloom_locality: 0
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.max_successive_merges: 0
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.optimize_filters_for_hits: 0
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.paranoid_file_checks: 0
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.force_consistency_checks: 1
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.report_bg_io_stats: 0
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.ttl: 2592000
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.periodic_compaction_seconds: 0
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.enable_blob_files: false
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.min_blob_size: 0
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.blob_file_size: 268435456
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.blob_compression_type: NoCompression
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.enable_blob_garbage_collection: false
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: [db/version_set.cc:4773] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000009 succeeded,manifest_file_number is 9, next_file_number is 11, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: [db/version_set.cc:4782] Column family [default] (ID 0), log number is 5
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.647+0000 7f01d0d87880 4 rocksdb: [db/version_set.cc:4083] Creating manifest 13
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.651+0000 7f01d0d87880 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773121432654921, "job": 1, "event": "recovery_started", "wal_files": [10]}
2026-03-10T05:43:52.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.651+0000 7f01d0d87880 4 rocksdb: [db/db_impl/db_impl_open.cc:847] Recovering log #10 mode 2
2026-03-10T05:43:52.911 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.651+0000 7f01d0d87880 3 rocksdb: [table/block_based/filter_policy.cc:996] Using legacy Bloom filter with high (20) bits/key. Dramatic filter space and/or accuracy improvement is available with format_version>=5.
2026-03-10T05:43:52.911 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.651+0000 7f01d0d87880 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773121432656237, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 14, "file_size": 70687, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 69004, "index_size": 176, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 581, "raw_key_size": 9687, "raw_average_key_size": 49, "raw_value_size": 63573, "raw_average_value_size": 324, "num_data_blocks": 8, "num_entries": 196, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1773121432, "oldest_key_time": 0, "file_creation_time": 0, "db_id": "c6329304-c2c1-42c6-a241-f0f851194597", "db_session_id": "LTGOBLXBDWQTT7GDMIRH"}}
2026-03-10T05:43:52.911 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.651+0000 7f01d0d87880 4 rocksdb: [db/version_set.cc:4083] Creating manifest 15
2026-03-10T05:43:52.911 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.651+0000 7f01d0d87880 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773121432657585, "job": 1, "event": "recovery_finished"}
2026-03-10T05:43:52.911 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: debug 2026-03-10T05:43:52.651+0000 7f01d0d87880 4 rocksdb: [file/delete_scheduler.cc:73] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000010.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
2026-03-10T05:43:52.911 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: cluster 2026-03-10T05:43:52.667067+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-10T05:43:52.911 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: cluster 2026-03-10T05:43:52.667093+0000 mon.a (mon.0) 2 : cluster [DBG] monmap e1: 1 mons at {a=[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0]}
2026-03-10T05:43:52.911 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: cluster 2026-03-10T05:43:52.667132+0000 mon.a (mon.0) 3 : cluster [DBG] fsmap
2026-03-10T05:43:52.911 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: cluster 2026-03-10T05:43:52.667142+0000 mon.a (mon.0) 4 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in
2026-03-10T05:43:52.911 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 bash[17462]: cluster 2026-03-10T05:43:52.667574+0000 mon.a (mon.0) 5 : cluster [DBG] mgrmap e1: no daemons active
2026-03-10T05:43:52.957 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: Failed to reset failed state of unit ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mgr.y.service: Unit ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mgr.y.service not loaded.
2026-03-10T05:43:52.961 INFO:teuthology.orchestra.run.vm02.stderr:systemctl: Created symlink /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a.target.wants/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mgr.y.service → /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service.
2026-03-10T05:43:53.112 INFO:teuthology.orchestra.run.vm02.stderr:firewalld does not appear to be present
2026-03-10T05:43:53.112 INFO:teuthology.orchestra.run.vm02.stderr:Not possible to enable service . firewalld.service is not available
2026-03-10T05:43:53.112 INFO:teuthology.orchestra.run.vm02.stderr:firewalld does not appear to be present
2026-03-10T05:43:53.112 INFO:teuthology.orchestra.run.vm02.stderr:Not possible to open ports <[9283]>. firewalld.service is not available
2026-03-10T05:43:53.112 INFO:teuthology.orchestra.run.vm02.stderr:Waiting for mgr to start...
2026-03-10T05:43:53.112 INFO:teuthology.orchestra.run.vm02.stderr:Waiting for mgr...
2026-03-10T05:43:53.157 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:52 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:43:53.157 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:53 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:43:53.157 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:43:53 vm02 systemd[1]: Started Ceph mgr.y for 107483ae-1c44-11f1-b530-c1172cd6122a.
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph:
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: {
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "fsid": "107483ae-1c44-11f1-b530-c1172cd6122a",
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "health": {
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "status": "HEALTH_OK",
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "checks": {},
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "mutes": []
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: },
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "election_epoch": 5,
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "quorum": [
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: 0
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: ],
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "quorum_names": [
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "a"
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: ],
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "quorum_age": 0,
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "monmap": {
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "epoch": 1,
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "min_mon_release_name": "quincy",
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_mons": 1
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: },
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "osdmap": {
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "epoch": 1,
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_osds": 0,
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_up_osds": 0,
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "osd_up_since": 0,
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_in_osds": 0,
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "osd_in_since": 0,
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_remapped_pgs": 0
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: },
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "pgmap": {
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "pgs_by_state": [],
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_pgs": 0,
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_pools": 0,
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_objects": 0,
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "data_bytes": 0,
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "bytes_used": 0,
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "bytes_avail": 0,
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "bytes_total": 0
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: },
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "fsmap": {
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "epoch": 1,
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "by_rank": [],
2026-03-10T05:43:53.288 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "up:standby": 0
2026-03-10T05:43:53.289 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: },
2026-03-10T05:43:53.289 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "mgrmap": {
2026-03-10T05:43:53.289 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "available": false,
2026-03-10T05:43:53.289 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_standbys": 0,
2026-03-10T05:43:53.289 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "modules": [
2026-03-10T05:43:53.289 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "iostat",
2026-03-10T05:43:53.289 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "nfs",
2026-03-10T05:43:53.289 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "restful"
2026-03-10T05:43:53.289 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: ],
2026-03-10T05:43:53.289 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "services": {}
2026-03-10T05:43:53.289 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: },
2026-03-10T05:43:53.289 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "servicemap": {
2026-03-10T05:43:53.289 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "epoch": 1,
2026-03-10T05:43:53.289 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "modified": "2026-03-10T05:43:51.949459+0000",
2026-03-10T05:43:53.289 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "services": {}
2026-03-10T05:43:53.289 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: },
2026-03-10T05:43:53.289 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "progress_events": {}
2026-03-10T05:43:53.289 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: }
2026-03-10T05:43:53.330 INFO:teuthology.orchestra.run.vm02.stderr:mgr not available, waiting (1/15)...
2026-03-10T05:43:53.583 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:43:53 vm02 bash[17731]: debug 2026-03-10T05:43:53.315+0000 7f2a41ba2000 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T05:43:53.583 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:43:53 vm02 bash[17731]: debug 2026-03-10T05:43:53.363+0000 7f2a41ba2000 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T05:43:54.064 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:53 vm02 bash[17462]: audit 2026-03-10T05:43:52.747870+0000 mon.a (mon.0) 6 : audit [INF] from='client.? 192.168.123.102:0/1955491091' entity='client.admin'
2026-03-10T05:43:54.064 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:53 vm02 bash[17462]: audit 2026-03-10T05:43:53.287071+0000 mon.a (mon.0) 7 : audit [DBG] from='client.? 192.168.123.102:0/3764863435' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T05:43:54.064 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:43:53 vm02 bash[17731]: debug 2026-03-10T05:43:53.631+0000 7f2a41ba2000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T05:43:54.065 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:43:54 vm02 bash[17731]: debug 2026-03-10T05:43:54.059+0000 7f2a41ba2000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T05:43:54.333 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:43:54 vm02 bash[17731]: debug 2026-03-10T05:43:54.139+0000 7f2a41ba2000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T05:43:54.333 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:43:54 vm02 bash[17731]: debug 2026-03-10T05:43:54.307+0000 7f2a41ba2000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T05:43:54.678 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:43:54 vm02 bash[17731]: debug 2026-03-10T05:43:54.395+0000 7f2a41ba2000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T05:43:54.678 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:43:54 vm02 bash[17731]: debug 2026-03-10T05:43:54.443+0000 7f2a41ba2000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T05:43:54.678 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:43:54 vm02 bash[17731]: debug 2026-03-10T05:43:54.559+0000 7f2a41ba2000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T05:43:54.678 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:43:54 vm02 bash[17731]: debug 2026-03-10T05:43:54.611+0000 7f2a41ba2000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T05:43:55.083 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:43:54 vm02 bash[17731]: debug 2026-03-10T05:43:54.675+0000 7f2a41ba2000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T05:43:55.394 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:43:55 vm02 bash[17731]: debug 2026-03-10T05:43:55.135+0000 7f2a41ba2000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T05:43:55.395 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:43:55 vm02 bash[17731]: debug 2026-03-10T05:43:55.183+0000 7f2a41ba2000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T05:43:55.395 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:43:55 vm02 bash[17731]: debug 2026-03-10T05:43:55.227+0000 7f2a41ba2000 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T05:43:55.543 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph:
2026-03-10T05:43:55.543 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: {
2026-03-10T05:43:55.543 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "fsid": "107483ae-1c44-11f1-b530-c1172cd6122a",
2026-03-10T05:43:55.543 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "health": {
2026-03-10T05:43:55.543 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "status": "HEALTH_OK",
2026-03-10T05:43:55.543 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "checks": {},
2026-03-10T05:43:55.543 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "mutes": []
2026-03-10T05:43:55.543 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: },
2026-03-10T05:43:55.543 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "election_epoch": 5,
2026-03-10T05:43:55.543 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "quorum": [
2026-03-10T05:43:55.543 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: 0
2026-03-10T05:43:55.543 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: ],
2026-03-10T05:43:55.543 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "quorum_names": [
2026-03-10T05:43:55.543 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "a"
2026-03-10T05:43:55.543 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: ],
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "quorum_age": 2,
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "monmap": {
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "epoch": 1,
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "min_mon_release_name": "quincy",
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_mons": 1
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: },
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "osdmap": {
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "epoch": 1,
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_osds": 0,
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_up_osds": 0,
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "osd_up_since": 0,
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_in_osds": 0,
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "osd_in_since": 0,
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_remapped_pgs": 0
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: },
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "pgmap": {
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "pgs_by_state": [],
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_pgs": 0,
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_pools": 0,
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_objects": 0,
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "data_bytes": 0,
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "bytes_used": 0,
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "bytes_avail": 0,
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "bytes_total": 0
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: },
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "fsmap": {
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "epoch": 1,
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "by_rank": [],
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "up:standby": 0
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: },
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "mgrmap": {
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "available": false,
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_standbys": 0,
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "modules": [
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "iostat",
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "nfs",
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "restful"
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: ],
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "services": {}
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: },
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "servicemap": {
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "epoch": 1,
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "modified": "2026-03-10T05:43:51.949459+0000",
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "services": {}
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: },
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "progress_events": {}
2026-03-10T05:43:55.544 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: }
2026-03-10T05:43:55.593 INFO:teuthology.orchestra.run.vm02.stderr:mgr not available, waiting (2/15)...
2026-03-10T05:43:55.651 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:55 vm02 bash[17462]: audit 2026-03-10T05:43:55.542462+0000 mon.a (mon.0) 8 : audit [DBG] from='client.? 192.168.123.102:0/1294343582' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T05:43:55.651 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:43:55 vm02 bash[17731]: debug 2026-03-10T05:43:55.531+0000 7f2a41ba2000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T05:43:55.651 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:43:55 vm02 bash[17731]: debug 2026-03-10T05:43:55.599+0000 7f2a41ba2000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T05:43:55.990 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:43:55 vm02 bash[17731]: debug 2026-03-10T05:43:55.647+0000 7f2a41ba2000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T05:43:55.991 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:43:55 vm02 bash[17731]: debug 2026-03-10T05:43:55.719+0000 7f2a41ba2000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T05:43:56.242 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:43:55 vm02 bash[17731]: debug 2026-03-10T05:43:55.987+0000 7f2a41ba2000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T05:43:56.242 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:43:56 vm02 bash[17731]: debug 2026-03-10T05:43:56.139+0000 7f2a41ba2000 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T05:43:56.242 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:43:56 vm02 bash[17731]: debug 2026-03-10T05:43:56.187+0000 7f2a41ba2000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T05:43:56.583 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:43:56 vm02 bash[17731]: debug 2026-03-10T05:43:56.239+0000 7f2a41ba2000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T05:43:56.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:43:56 vm02 bash[17731]: debug 2026-03-10T05:43:56.359+0000 7f2a41ba2000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T05:43:57.083 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:43:56 vm02 bash[17731]: debug 2026-03-10T05:43:56.779+0000 7f2a41ba2000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T05:43:57.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:56 vm02 bash[17462]: cluster 2026-03-10T05:43:56.783809+0000 mon.a (mon.0) 9 : cluster [INF] Activating manager daemon y
2026-03-10T05:43:57.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:56 vm02 bash[17462]: cluster 2026-03-10T05:43:56.787385+0000 mon.a (mon.0) 10 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00364367s)
2026-03-10T05:43:57.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:56 vm02 bash[17462]: audit 2026-03-10T05:43:56.789796+0000 mon.a (mon.0) 11 : audit [DBG] from='mgr.14100 192.168.123.102:0/3770685132' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T05:43:57.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:56 vm02 bash[17462]: audit 2026-03-10T05:43:56.789945+0000 mon.a (mon.0) 12 : audit [DBG] from='mgr.14100 192.168.123.102:0/3770685132' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T05:43:57.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:56 vm02 bash[17462]: audit 2026-03-10T05:43:56.790127+0000 mon.a (mon.0) 13 : audit [DBG] from='mgr.14100 192.168.123.102:0/3770685132' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T05:43:57.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:56 vm02 bash[17462]: audit 2026-03-10T05:43:56.790282+0000 mon.a (mon.0) 14 : audit [DBG] from='mgr.14100 192.168.123.102:0/3770685132' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T05:43:57.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:56 vm02 bash[17462]: audit 2026-03-10T05:43:56.791064+0000 mon.a (mon.0) 15 : audit [DBG] from='mgr.14100 192.168.123.102:0/3770685132' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T05:43:57.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:56 vm02 bash[17462]: cluster 2026-03-10T05:43:56.794903+0000 mon.a (mon.0) 16 : cluster [INF] Manager daemon y is now available
2026-03-10T05:43:57.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:56 vm02 bash[17462]: audit 2026-03-10T05:43:56.801973+0000 mon.a (mon.0) 17 : audit [INF] from='mgr.14100 192.168.123.102:0/3770685132' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T05:43:57.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:56 vm02 bash[17462]: audit 2026-03-10T05:43:56.802748+0000 mon.a (mon.0) 18 : audit [INF] from='mgr.14100 192.168.123.102:0/3770685132' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T05:43:57.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:56 vm02 bash[17462]: audit 2026-03-10T05:43:56.806730+0000 mon.a (mon.0) 19 : audit [INF] from='mgr.14100 192.168.123.102:0/3770685132' entity='mgr.y'
2026-03-10T05:43:57.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:56 vm02 bash[17462]: audit 2026-03-10T05:43:56.808974+0000 mon.a (mon.0) 20 : audit [INF] from='mgr.14100 192.168.123.102:0/3770685132' entity='mgr.y'
2026-03-10T05:43:57.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:56 vm02 bash[17462]: audit 2026-03-10T05:43:56.811124+0000 mon.a (mon.0)
21 : audit [INF] from='mgr.14100 192.168.123.102:0/3770685132' entity='mgr.y' 2026-03-10T05:43:57.761 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: 2026-03-10T05:43:57.761 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: { 2026-03-10T05:43:57.761 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "fsid": "107483ae-1c44-11f1-b530-c1172cd6122a", 2026-03-10T05:43:57.761 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "health": { 2026-03-10T05:43:57.761 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "status": "HEALTH_OK", 2026-03-10T05:43:57.762 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "checks": {}, 2026-03-10T05:43:57.762 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "mutes": [] 2026-03-10T05:43:57.762 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: }, 2026-03-10T05:43:57.762 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "election_epoch": 5, 2026-03-10T05:43:57.762 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "quorum": [ 2026-03-10T05:43:57.762 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: 0 2026-03-10T05:43:57.762 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: ], 2026-03-10T05:43:57.762 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "quorum_names": [ 2026-03-10T05:43:57.762 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "a" 2026-03-10T05:43:57.762 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: ], 2026-03-10T05:43:57.762 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "quorum_age": 5, 2026-03-10T05:43:57.762 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "monmap": { 2026-03-10T05:43:57.762 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-10T05:43:57.762 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "min_mon_release_name": "quincy", 2026-03-10T05:43:57.762 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_mons": 1 2026-03-10T05:43:57.762 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: }, 2026-03-10T05:43:57.762 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "osdmap": { 2026-03-10T05:43:57.762 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-10T05:43:57.762 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_osds": 0, 2026-03-10T05:43:57.762 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_up_osds": 0, 2026-03-10T05:43:57.762 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "osd_up_since": 0, 2026-03-10T05:43:57.762 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_in_osds": 0, 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "osd_in_since": 0, 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_remapped_pgs": 0 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: }, 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "pgmap": { 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "pgs_by_state": [], 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_pgs": 0, 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_pools": 0, 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_objects": 0, 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "data_bytes": 0, 2026-03-10T05:43:57.763 
INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "bytes_used": 0, 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "bytes_avail": 0, 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "bytes_total": 0 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: }, 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "fsmap": { 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "by_rank": [], 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "up:standby": 0 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: }, 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "mgrmap": { 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "available": false, 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_standbys": 0, 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "modules": [ 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "iostat", 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "nfs", 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "restful" 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: ], 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "services": {} 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: }, 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "servicemap": { 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "modified": "2026-03-10T05:43:51.949459+0000", 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "services": {} 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: }, 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "progress_events": {} 2026-03-10T05:43:57.763 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: } 2026-03-10T05:43:57.795 INFO:teuthology.orchestra.run.vm02.stderr:mgr not available, waiting (3/15)... 2026-03-10T05:43:59.083 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:58 vm02 bash[17462]: audit 2026-03-10T05:43:57.761152+0000 mon.a (mon.0) 22 : audit [DBG] from='client.? 
192.168.123.102:0/1944388036' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T05:43:59.083 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:43:58 vm02 bash[17462]: cluster 2026-03-10T05:43:57.791595+0000 mon.a (mon.0) 23 : cluster [DBG] mgrmap e3: y(active, since 1.00786s) 2026-03-10T05:44:00.053 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: 2026-03-10T05:44:00.053 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: { 2026-03-10T05:44:00.053 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "fsid": "107483ae-1c44-11f1-b530-c1172cd6122a", 2026-03-10T05:44:00.053 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "health": { 2026-03-10T05:44:00.053 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "status": "HEALTH_OK", 2026-03-10T05:44:00.053 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "checks": {}, 2026-03-10T05:44:00.053 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "mutes": [] 2026-03-10T05:44:00.053 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: }, 2026-03-10T05:44:00.053 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "election_epoch": 5, 2026-03-10T05:44:00.053 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "quorum": [ 2026-03-10T05:44:00.053 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: 0 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: ], 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "quorum_names": [ 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "a" 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: ], 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "quorum_age": 7, 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "monmap": { 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "min_mon_release_name": "quincy", 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_mons": 1 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: }, 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "osdmap": { 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_osds": 0, 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_up_osds": 0, 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "osd_up_since": 0, 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_in_osds": 0, 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "osd_in_since": 0, 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_remapped_pgs": 0 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: }, 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "pgmap": { 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "pgs_by_state": [], 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_pgs": 0, 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_pools": 0, 
2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_objects": 0, 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "data_bytes": 0, 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "bytes_used": 0, 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "bytes_avail": 0, 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "bytes_total": 0 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: }, 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "fsmap": { 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "by_rank": [], 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "up:standby": 0 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: }, 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "mgrmap": { 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "available": true, 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_standbys": 0, 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "modules": [ 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "iostat", 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "nfs", 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "restful" 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: ], 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "services": {} 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: }, 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "servicemap": { 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "modified": "2026-03-10T05:43:51.949459+0000", 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "services": {} 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: }, 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "progress_events": {} 2026-03-10T05:44:00.054 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: } 2026-03-10T05:44:00.087 INFO:teuthology.orchestra.run.vm02.stderr:mgr is available 2026-03-10T05:44:00.302 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: 2026-03-10T05:44:00.302 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: [global] 2026-03-10T05:44:00.302 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: fsid = 107483ae-1c44-11f1-b530-c1172cd6122a 2026-03-10T05:44:00.302 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: mon_osd_allow_pg_remap = true 2026-03-10T05:44:00.302 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: mon_osd_allow_primary_affinity = true 2026-03-10T05:44:00.302 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: mon_warn_on_no_sortbitwise = false 2026-03-10T05:44:00.302 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: osd_crush_chooseleaf_type = 0 2026-03-10T05:44:00.302 
INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: 2026-03-10T05:44:00.302 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: [mgr] 2026-03-10T05:44:00.302 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: mgr/telemetry/nag = false 2026-03-10T05:44:00.302 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: 2026-03-10T05:44:00.302 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: [osd] 2026-03-10T05:44:00.302 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: osd_map_max_advance = 10 2026-03-10T05:44:00.302 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: osd_mclock_iops_capacity_threshold_hdd = 49000 2026-03-10T05:44:00.302 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: osd_sloppy_crc = true 2026-03-10T05:44:00.356 INFO:teuthology.orchestra.run.vm02.stderr:Enabling cephadm module... 2026-03-10T05:44:01.083 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:00 vm02 bash[17462]: cluster 2026-03-10T05:43:59.673693+0000 mon.a (mon.0) 24 : cluster [DBG] mgrmap e4: y(active, since 2s) 2026-03-10T05:44:01.083 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:00 vm02 bash[17462]: audit 2026-03-10T05:44:00.052465+0000 mon.a (mon.0) 25 : audit [DBG] from='client.? 192.168.123.102:0/4021232271' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T05:44:01.083 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:00 vm02 bash[17462]: audit 2026-03-10T05:44:00.297101+0000 mon.a (mon.0) 26 : audit [INF] from='client.? 192.168.123.102:0/2201917457' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T05:44:01.083 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:00 vm02 bash[17462]: audit 2026-03-10T05:44:00.300496+0000 mon.a (mon.0) 27 : audit [INF] from='client.? 192.168.123.102:0/2201917457' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-10T05:44:01.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:00 vm02 bash[17462]: audit 2026-03-10T05:44:00.601312+0000 mon.a (mon.0) 28 : audit [INF] from='client.? 
192.168.123.102:0/705689371' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-10T05:44:01.583 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:01 vm02 bash[17731]: ignoring --setuser ceph since I am not root 2026-03-10T05:44:01.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:01 vm02 bash[17731]: ignoring --setgroup ceph since I am not root 2026-03-10T05:44:01.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:01 vm02 bash[17731]: debug 2026-03-10T05:44:01.439+0000 7f3ca87e5000 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T05:44:01.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:01 vm02 bash[17731]: debug 2026-03-10T05:44:01.483+0000 7f3ca87e5000 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T05:44:01.619 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: { 2026-03-10T05:44:01.619 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "epoch": 5, 2026-03-10T05:44:01.619 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "available": true, 2026-03-10T05:44:01.620 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "active_name": "y", 2026-03-10T05:44:01.620 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_standby": 0 2026-03-10T05:44:01.620 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: } 2026-03-10T05:44:01.661 INFO:teuthology.orchestra.run.vm02.stderr:Waiting for the mgr to restart... 2026-03-10T05:44:01.661 INFO:teuthology.orchestra.run.vm02.stderr:Waiting for mgr epoch 5... 2026-03-10T05:44:02.083 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:01 vm02 bash[17731]: debug 2026-03-10T05:44:01.791+0000 7f3ca87e5000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T05:44:02.549 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:02 vm02 bash[17462]: audit 2026-03-10T05:44:01.303345+0000 mon.a (mon.0) 29 : audit [INF] from='client.? 192.168.123.102:0/705689371' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T05:44:02.549 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:02 vm02 bash[17462]: cluster 2026-03-10T05:44:01.303418+0000 mon.a (mon.0) 30 : cluster [DBG] mgrmap e5: y(active, since 4s) 2026-03-10T05:44:02.549 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:02 vm02 bash[17462]: audit 2026-03-10T05:44:01.619497+0000 mon.a (mon.0) 31 : audit [DBG] from='client.? 
192.168.123.102:0/1880371044' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T05:44:02.549 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:02 vm02 bash[17731]: debug 2026-03-10T05:44:02.211+0000 7f3ca87e5000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T05:44:02.549 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:02 vm02 bash[17731]: debug 2026-03-10T05:44:02.291+0000 7f3ca87e5000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T05:44:02.549 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:02 vm02 bash[17731]: debug 2026-03-10T05:44:02.455+0000 7f3ca87e5000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T05:44:02.826 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:02 vm02 bash[17731]: debug 2026-03-10T05:44:02.543+0000 7f3ca87e5000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T05:44:02.826 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:02 vm02 bash[17731]: debug 2026-03-10T05:44:02.591+0000 7f3ca87e5000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T05:44:02.826 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:02 vm02 bash[17731]: debug 2026-03-10T05:44:02.707+0000 7f3ca87e5000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T05:44:02.826 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:02 vm02 bash[17731]: debug 2026-03-10T05:44:02.759+0000 7f3ca87e5000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T05:44:02.826 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:02 vm02 bash[17731]: debug 2026-03-10T05:44:02.823+0000 7f3ca87e5000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T05:44:03.583 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:03 vm02 bash[17731]: debug 2026-03-10T05:44:03.275+0000 7f3ca87e5000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T05:44:03.583 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:03 vm02 bash[17731]: debug 2026-03-10T05:44:03.327+0000 7f3ca87e5000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T05:44:03.583 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:03 vm02 bash[17731]: debug 2026-03-10T05:44:03.375+0000 7f3ca87e5000 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T05:44:04.083 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:03 vm02 bash[17731]: debug 2026-03-10T05:44:03.655+0000 7f3ca87e5000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T05:44:04.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:03 vm02 bash[17731]: debug 2026-03-10T05:44:03.707+0000 7f3ca87e5000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T05:44:04.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:03 vm02 bash[17731]: debug 2026-03-10T05:44:03.759+0000 7f3ca87e5000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T05:44:04.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:03 vm02 bash[17731]: debug 2026-03-10T05:44:03.831+0000 7f3ca87e5000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T05:44:04.349 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:04 vm02 bash[17731]: debug 2026-03-10T05:44:04.095+0000 7f3ca87e5000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T05:44:04.349 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:04 vm02 bash[17731]: debug 2026-03-10T05:44:04.247+0000 7f3ca87e5000 -1 mgr[py] Module prometheus has 
missing NOTIFY_TYPES member 2026-03-10T05:44:04.349 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:04 vm02 bash[17731]: debug 2026-03-10T05:44:04.291+0000 7f3ca87e5000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T05:44:04.833 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:04 vm02 bash[17731]: debug 2026-03-10T05:44:04.343+0000 7f3ca87e5000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T05:44:04.834 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:04 vm02 bash[17731]: debug 2026-03-10T05:44:04.467+0000 7f3ca87e5000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T05:44:05.333 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:04 vm02 bash[17462]: cluster 2026-03-10T05:44:04.884676+0000 mon.a (mon.0) 32 : cluster [INF] Active manager daemon y restarted 2026-03-10T05:44:05.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:04 vm02 bash[17462]: cluster 2026-03-10T05:44:04.885386+0000 mon.a (mon.0) 33 : cluster [INF] Activating manager daemon y 2026-03-10T05:44:05.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:04 vm02 bash[17462]: cluster 2026-03-10T05:44:04.887451+0000 mon.a (mon.0) 34 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-10T05:44:05.334 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:04 vm02 bash[17731]: debug 2026-03-10T05:44:04.879+0000 7f3ca87e5000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T05:44:05.950 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: { 2026-03-10T05:44:05.950 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "mgrmap_epoch": 7, 2026-03-10T05:44:05.950 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "initialized": true 2026-03-10T05:44:05.950 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: } 2026-03-10T05:44:05.987 INFO:teuthology.orchestra.run.vm02.stderr:mgr epoch 5 is available 2026-03-10T05:44:05.987 INFO:teuthology.orchestra.run.vm02.stderr:Setting orchestrator backend to cephadm... 
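The sequence recorded above (enable the cephadm mgr module, wait for the active mgr to restart at a newer mgrmap epoch, then select the backend) can be reproduced by hand. A minimal sketch, assuming admin keyring access on the node; it reuses the `ceph mgr stat` JSON shape shown in the log and the `jq` tool already used elsewhere in this run. The target epoch is illustrative; the harness derives it from the pre-restart mgrmap.

    # Enable the orchestrator module, then poll until the mgr has
    # restarted and the mgrmap epoch has caught up.
    ceph mgr module enable cephadm
    target_epoch=5
    until [ "$(ceph mgr stat --format json | jq -r '.epoch')" -ge "$target_epoch" ]; do
        echo "Waiting for mgr epoch $target_epoch..."
        sleep 2
    done
    ceph orch set backend cephadm
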
2026-03-10T05:44:06.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:05 vm02 bash[17462]: cluster 2026-03-10T05:44:04.938176+0000 mon.a (mon.0) 35 : cluster [DBG] mgrmap e6: y(active, starting, since 0.0528651s) 2026-03-10T05:44:06.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:05 vm02 bash[17462]: audit 2026-03-10T05:44:04.941819+0000 mon.a (mon.0) 36 : audit [DBG] from='mgr.14120 192.168.123.102:0/1507654234' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:44:06.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:05 vm02 bash[17462]: audit 2026-03-10T05:44:04.941997+0000 mon.a (mon.0) 37 : audit [DBG] from='mgr.14120 192.168.123.102:0/1507654234' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T05:44:06.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:05 vm02 bash[17462]: audit 2026-03-10T05:44:04.943031+0000 mon.a (mon.0) 38 : audit [DBG] from='mgr.14120 192.168.123.102:0/1507654234' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T05:44:06.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:05 vm02 bash[17462]: audit 2026-03-10T05:44:04.943225+0000 mon.a (mon.0) 39 : audit [DBG] from='mgr.14120 192.168.123.102:0/1507654234' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T05:44:06.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:05 vm02 bash[17462]: audit 2026-03-10T05:44:04.943418+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14120 192.168.123.102:0/1507654234' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T05:44:06.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:05 vm02 bash[17462]: cluster 2026-03-10T05:44:04.947341+0000 mon.a (mon.0) 41 : cluster [INF] Manager daemon y is now available 2026-03-10T05:44:06.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:05 vm02 bash[17462]: audit 2026-03-10T05:44:04.955250+0000 mon.a (mon.0) 42 : audit [INF] from='mgr.14120 192.168.123.102:0/1507654234' entity='mgr.y' 2026-03-10T05:44:06.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:05 vm02 bash[17462]: audit 2026-03-10T05:44:04.957288+0000 mon.a (mon.0) 43 : audit [INF] from='mgr.14120 192.168.123.102:0/1507654234' entity='mgr.y' 2026-03-10T05:44:06.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:05 vm02 bash[17462]: audit 2026-03-10T05:44:04.965883+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14120 192.168.123.102:0/1507654234' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:44:06.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:05 vm02 bash[17462]: audit 2026-03-10T05:44:04.966605+0000 mon.a (mon.0) 45 : audit [DBG] from='mgr.14120 192.168.123.102:0/1507654234' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:44:06.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:05 vm02 bash[17462]: audit 2026-03-10T05:44:04.967390+0000 mon.a (mon.0) 46 : audit [DBG] from='mgr.14120 192.168.123.102:0/1507654234' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:44:06.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:05 vm02 bash[17462]: audit 2026-03-10T05:44:04.971227+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14120 192.168.123.102:0/1507654234' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:44:06.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:05 vm02 bash[17462]: audit 
2026-03-10T05:44:04.971894+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.14120 192.168.123.102:0/1507654234' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:44:06.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:05 vm02 bash[17462]: audit 2026-03-10T05:44:05.931908+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14120 192.168.123.102:0/1507654234' entity='mgr.y' 2026-03-10T05:44:06.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:05 vm02 bash[17731]: [10/Mar/2026:05:44:05] ENGINE Bus STARTING 2026-03-10T05:44:06.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:05 vm02 bash[17731]: [10/Mar/2026:05:44:05] ENGINE Serving on https://192.168.123.102:7150 2026-03-10T05:44:06.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:05 vm02 bash[17731]: [10/Mar/2026:05:44:05] ENGINE Bus STARTED 2026-03-10T05:44:06.466 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: value unchanged 2026-03-10T05:44:06.499 INFO:teuthology.orchestra.run.vm02.stderr:Generating ssh key... 2026-03-10T05:44:07.168 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhoGVKGmHR1/pl1LL6b4N5J3bxsJRd2mlCKPnD/BqsLyVmZmQmN4NwbU0hDNkhXRnpgWau71Aw/1+vbFuOcxid4vnDGukwRWilpRr0BurwJMzb6KBLB1AsMCpMv5WqaEF9MFU1jqwmHFmwch51xyiTp6QNlQx2hJKmgK3gCwClOfNhk4Lx52y6Oqw9geUBUMRVLjdIQx1xArqZhDeSnMqTSinld6Nff3tbVxoHJ/vrXsigc6RZeauvtJ1bsrC39l6/OlrpQS8ZBl33qv59Hozg/f+h/sqFE0NGBLxb6xYXV1gSdBZyqYsnQLPXQT9voL0d3CLx+xgra5ET246R5wiCy5Wbckgy9EP/dt+Ud671wdX715Eslwb/2K/aR7/t4bAAYOZGs+oA2wX3g/1NI0a2oElR4P4jXlt6D5+ZXMzeDkz1kR3uZV0bnFy8cALGz4boOyvbCU+RPadynCnXo3pI3AiCWVf7nUrIi4+7sNTF9t/CWPexFyL0igNgMjkvaFM= ceph-107483ae-1c44-11f1-b530-c1172cd6122a 2026-03-10T05:44:07.177 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:06 vm02 bash[17462]: cephadm 2026-03-10T05:44:05.819202+0000 mgr.y (mgr.14120) 1 : cephadm [INF] [10/Mar/2026:05:44:05] ENGINE Bus STARTING 2026-03-10T05:44:07.177 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:06 vm02 bash[17462]: cephadm 2026-03-10T05:44:05.928790+0000 mgr.y (mgr.14120) 2 : cephadm [INF] [10/Mar/2026:05:44:05] ENGINE Serving on https://192.168.123.102:7150 2026-03-10T05:44:07.177 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:06 vm02 bash[17462]: cephadm 2026-03-10T05:44:05.928869+0000 mgr.y (mgr.14120) 3 : cephadm [INF] [10/Mar/2026:05:44:05] ENGINE Bus STARTED 2026-03-10T05:44:07.177 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:06 vm02 bash[17462]: audit 2026-03-10T05:44:05.940177+0000 mon.a (mon.0) 50 : audit [DBG] from='mgr.14120 192.168.123.102:0/1507654234' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:44:07.177 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:06 vm02 bash[17462]: cluster 2026-03-10T05:44:05.945637+0000 mon.a (mon.0) 51 : cluster [DBG] mgrmap e7: y(active, since 1.06033s) 2026-03-10T05:44:07.177 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:06 vm02 bash[17462]: audit 2026-03-10T05:44:05.946751+0000 mgr.y (mgr.14120) 4 : audit [DBG] from='client.14124 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T05:44:07.177 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:06 vm02 bash[17462]: audit 2026-03-10T05:44:05.950436+0000 mgr.y (mgr.14120) 5 : audit [DBG] from='client.14124 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T05:44:07.177 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:06 vm02 bash[17462]: audit 
2026-03-10T05:44:06.217003+0000 mgr.y (mgr.14120) 6 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:44:07.177 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:06 vm02 bash[17462]: audit 2026-03-10T05:44:06.222662+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14120 192.168.123.102:0/1507654234' entity='mgr.y' 2026-03-10T05:44:07.177 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:06 vm02 bash[17462]: audit 2026-03-10T05:44:06.260238+0000 mon.a (mon.0) 53 : audit [DBG] from='mgr.14120 192.168.123.102:0/1507654234' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:44:07.177 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:06 vm02 bash[17462]: audit 2026-03-10T05:44:06.901580+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.14120 192.168.123.102:0/1507654234' entity='mgr.y' 2026-03-10T05:44:07.177 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:06 vm02 bash[17462]: audit 2026-03-10T05:44:06.903734+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.14120 192.168.123.102:0/1507654234' entity='mgr.y' 2026-03-10T05:44:07.181 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:06 vm02 bash[17731]: Generating public/private rsa key pair. 2026-03-10T05:44:07.181 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:06 vm02 bash[17731]: Your identification has been saved in /tmp/tmpoz_l510d/key. 2026-03-10T05:44:07.181 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:06 vm02 bash[17731]: Your public key has been saved in /tmp/tmpoz_l510d/key.pub. 2026-03-10T05:44:07.181 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:06 vm02 bash[17731]: The key fingerprint is: 2026-03-10T05:44:07.181 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:06 vm02 bash[17731]: SHA256:hBdIUtGhORamGYs3D4F7EWMLFJMMp16aUeoEkip0U+Q ceph-107483ae-1c44-11f1-b530-c1172cd6122a 2026-03-10T05:44:07.181 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:06 vm02 bash[17731]: The key's randomart image is: 2026-03-10T05:44:07.181 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:06 vm02 bash[17731]: +---[RSA 3072]----+ 2026-03-10T05:44:07.181 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:06 vm02 bash[17731]: |+=*=X**+o. | 2026-03-10T05:44:07.181 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:06 vm02 bash[17731]: |++*Bo@.=.. | 2026-03-10T05:44:07.181 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:06 vm02 bash[17731]: |++ooXE* o | 2026-03-10T05:44:07.181 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:06 vm02 bash[17731]: |* *..= + | 2026-03-10T05:44:07.181 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:06 vm02 bash[17731]: |.= . . S | 2026-03-10T05:44:07.181 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:06 vm02 bash[17731]: | | 2026-03-10T05:44:07.181 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:06 vm02 bash[17731]: | | 2026-03-10T05:44:07.181 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:06 vm02 bash[17731]: | | 2026-03-10T05:44:07.181 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:06 vm02 bash[17731]: | | 2026-03-10T05:44:07.181 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:06 vm02 bash[17731]: +----[SHA256]-----+ 2026-03-10T05:44:07.206 INFO:teuthology.orchestra.run.vm02.stderr:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub 2026-03-10T05:44:07.206 INFO:teuthology.orchestra.run.vm02.stderr:Adding key to root@localhost authorized_keys... 
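The key generation and authorization steps logged above correspond to cephadm's standard host-onboarding flow. A hedged sketch of the manual equivalents, mirroring the 'cephadm set-user' and 'cephadm get-pub-key' audit entries; the temporary path is illustrative, and vm02 matches the target host in this run:

    # Fetch the cluster SSH public key cephadm just generated and
    # authorize it on the host to be managed.
    ceph cephadm set-user root
    ceph cephadm get-pub-key > /tmp/ceph.pub
    ssh-copy-id -f -i /tmp/ceph.pub root@vm02
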
2026-03-10T05:44:07.206 INFO:teuthology.orchestra.run.vm02.stderr:Adding host vm02... 2026-03-10T05:44:07.842 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: Added host 'vm02' with addr '192.168.123.102' 2026-03-10T05:44:07.877 INFO:teuthology.orchestra.run.vm02.stderr:Deploying unmanaged mon service... 2026-03-10T05:44:08.143 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: Scheduled mon update... 2026-03-10T05:44:08.180 INFO:teuthology.orchestra.run.vm02.stderr:Deploying unmanaged mgr service... 2026-03-10T05:44:08.409 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: Scheduled mgr update... 2026-03-10T05:44:08.418 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:08 vm02 bash[17462]: audit 2026-03-10T05:44:06.465700+0000 mgr.y (mgr.14120) 7 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:44:08.418 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:08 vm02 bash[17462]: audit 2026-03-10T05:44:06.699078+0000 mgr.y (mgr.14120) 8 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:44:08.418 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:08 vm02 bash[17462]: cephadm 2026-03-10T05:44:06.699257+0000 mgr.y (mgr.14120) 9 : cephadm [INF] Generating ssh key... 2026-03-10T05:44:08.418 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:08 vm02 bash[17462]: audit 2026-03-10T05:44:07.167853+0000 mgr.y (mgr.14120) 10 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:44:08.418 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:08 vm02 bash[17462]: cluster 2026-03-10T05:44:07.224547+0000 mon.a (mon.0) 56 : cluster [DBG] mgrmap e8: y(active, since 2s) 2026-03-10T05:44:08.418 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:08 vm02 bash[17462]: audit 2026-03-10T05:44:07.839923+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.14120 192.168.123.102:0/1507654234' entity='mgr.y' 2026-03-10T05:44:08.418 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:08 vm02 bash[17462]: audit 2026-03-10T05:44:07.879220+0000 mon.a (mon.0) 58 : audit [DBG] from='mgr.14120 192.168.123.102:0/1507654234' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:44:08.418 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:08 vm02 bash[17462]: audit 2026-03-10T05:44:08.142839+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14120 192.168.123.102:0/1507654234' entity='mgr.y' 2026-03-10T05:44:08.947 INFO:teuthology.orchestra.run.vm02.stderr:Enabling the dashboard module... 
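The "Added host" and "Scheduled mon/mgr update" messages above map to orchestrator calls that can also be issued directly; `--unmanaged` matches the `"unmanaged": true` audit payloads and keeps cephadm from placing additional daemons during the test. A sketch:

    ceph orch host add vm02 192.168.123.102
    ceph orch apply mon --unmanaged
    ceph orch apply mgr --unmanaged
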
2026-03-10T05:44:09.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:09 vm02 bash[17462]: audit 2026-03-10T05:44:07.433382+0000 mgr.y (mgr.14120) 11 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm02", "addr": "192.168.123.102", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:44:09.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:09 vm02 bash[17462]: cephadm 2026-03-10T05:44:07.618153+0000 mgr.y (mgr.14120) 12 : cephadm [INF] Deploying cephadm binary to vm02 2026-03-10T05:44:09.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:09 vm02 bash[17462]: cephadm 2026-03-10T05:44:07.840364+0000 mgr.y (mgr.14120) 13 : cephadm [INF] Added host vm02 2026-03-10T05:44:09.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:09 vm02 bash[17462]: audit 2026-03-10T05:44:08.139505+0000 mgr.y (mgr.14120) 14 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:44:09.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:09 vm02 bash[17462]: cephadm 2026-03-10T05:44:08.140303+0000 mgr.y (mgr.14120) 15 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-10T05:44:09.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:09 vm02 bash[17462]: audit 2026-03-10T05:44:08.408643+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.14120 192.168.123.102:0/1507654234' entity='mgr.y' 2026-03-10T05:44:09.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:09 vm02 bash[17462]: audit 2026-03-10T05:44:08.657556+0000 mon.a (mon.0) 61 : audit [INF] from='client.? 192.168.123.102:0/2295306284' entity='client.admin' 2026-03-10T05:44:09.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:09 vm02 bash[17462]: audit 2026-03-10T05:44:08.909484+0000 mon.a (mon.0) 62 : audit [INF] from='client.? 192.168.123.102:0/502731798' entity='client.admin' 2026-03-10T05:44:10.553 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: { 2026-03-10T05:44:10.553 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "epoch": 9, 2026-03-10T05:44:10.553 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "available": true, 2026-03-10T05:44:10.553 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "active_name": "y", 2026-03-10T05:44:10.553 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "num_standby": 0 2026-03-10T05:44:10.553 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: } 2026-03-10T05:44:10.573 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:10 vm02 bash[17462]: audit 2026-03-10T05:44:08.405331+0000 mgr.y (mgr.14120) 16 : audit [DBG] from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:44:10.574 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:10 vm02 bash[17462]: cephadm 2026-03-10T05:44:08.406161+0000 mgr.y (mgr.14120) 17 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-10T05:44:10.574 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:10 vm02 bash[17462]: audit 2026-03-10T05:44:09.237124+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14120 192.168.123.102:0/1507654234' entity='mgr.y' 2026-03-10T05:44:10.574 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:10 vm02 bash[17462]: audit 2026-03-10T05:44:09.276096+0000 mon.a (mon.0) 64 : audit [INF] from='client.? 
192.168.123.102:0/4216966907' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T05:44:10.574 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:10 vm02 bash[17462]: audit 2026-03-10T05:44:09.355888+0000 mon.a (mon.0) 65 : audit [INF] from='mgr.14120 192.168.123.102:0/1507654234' entity='mgr.y' 2026-03-10T05:44:10.581 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:10 vm02 bash[17731]: ignoring --setuser ceph since I am not root 2026-03-10T05:44:10.581 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:10 vm02 bash[17731]: ignoring --setgroup ceph since I am not root 2026-03-10T05:44:10.581 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:10 vm02 bash[17731]: debug 2026-03-10T05:44:10.375+0000 7f3ddcd1d000 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T05:44:10.581 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:10 vm02 bash[17731]: debug 2026-03-10T05:44:10.423+0000 7f3ddcd1d000 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T05:44:10.616 INFO:teuthology.orchestra.run.vm02.stderr:Waiting for the mgr to restart... 2026-03-10T05:44:10.616 INFO:teuthology.orchestra.run.vm02.stderr:Waiting for mgr epoch 9... 2026-03-10T05:44:10.833 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:10 vm02 bash[17731]: debug 2026-03-10T05:44:10.759+0000 7f3ddcd1d000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T05:44:11.455 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:11 vm02 bash[17462]: audit 2026-03-10T05:44:10.242731+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.102:0/4216966907' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-10T05:44:11.455 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:11 vm02 bash[17462]: cluster 2026-03-10T05:44:10.242805+0000 mon.a (mon.0) 67 : cluster [DBG] mgrmap e9: y(active, since 5s) 2026-03-10T05:44:11.455 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:11 vm02 bash[17462]: audit 2026-03-10T05:44:10.552959+0000 mon.a (mon.0) 68 : audit [DBG] from='client.? 
192.168.123.102:0/1938891394' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T05:44:11.455 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:11 vm02 bash[17731]: debug 2026-03-10T05:44:11.195+0000 7f3ddcd1d000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T05:44:11.455 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:11 vm02 bash[17731]: debug 2026-03-10T05:44:11.279+0000 7f3ddcd1d000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T05:44:11.706 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:11 vm02 bash[17731]: debug 2026-03-10T05:44:11.451+0000 7f3ddcd1d000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T05:44:11.706 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:11 vm02 bash[17731]: debug 2026-03-10T05:44:11.543+0000 7f3ddcd1d000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T05:44:11.706 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:11 vm02 bash[17731]: debug 2026-03-10T05:44:11.587+0000 7f3ddcd1d000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T05:44:12.083 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:11 vm02 bash[17731]: debug 2026-03-10T05:44:11.703+0000 7f3ddcd1d000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T05:44:12.083 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:11 vm02 bash[17731]: debug 2026-03-10T05:44:11.755+0000 7f3ddcd1d000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T05:44:12.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:11 vm02 bash[17731]: debug 2026-03-10T05:44:11.811+0000 7f3ddcd1d000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T05:44:12.583 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:12 vm02 bash[17731]: debug 2026-03-10T05:44:12.255+0000 7f3ddcd1d000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T05:44:12.583 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:12 vm02 bash[17731]: debug 2026-03-10T05:44:12.303+0000 7f3ddcd1d000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T05:44:12.583 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:12 vm02 bash[17731]: debug 2026-03-10T05:44:12.351+0000 7f3ddcd1d000 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T05:44:13.083 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:12 vm02 bash[17731]: debug 2026-03-10T05:44:12.623+0000 7f3ddcd1d000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T05:44:13.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:12 vm02 bash[17731]: debug 2026-03-10T05:44:12.679+0000 7f3ddcd1d000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T05:44:13.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:12 vm02 bash[17731]: debug 2026-03-10T05:44:12.727+0000 7f3ddcd1d000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T05:44:13.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:12 vm02 bash[17731]: debug 2026-03-10T05:44:12.799+0000 7f3ddcd1d000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T05:44:13.362 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:13 vm02 bash[17731]: debug 2026-03-10T05:44:13.087+0000 7f3ddcd1d000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T05:44:13.362 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:13 vm02 bash[17731]: debug 2026-03-10T05:44:13.251+0000 7f3ddcd1d000 -1 mgr[py] Module prometheus has 
missing NOTIFY_TYPES member 2026-03-10T05:44:13.362 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:13 vm02 bash[17731]: debug 2026-03-10T05:44:13.299+0000 7f3ddcd1d000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T05:44:13.833 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:13 vm02 bash[17731]: debug 2026-03-10T05:44:13.359+0000 7f3ddcd1d000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T05:44:13.833 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:13 vm02 bash[17731]: debug 2026-03-10T05:44:13.487+0000 7f3ddcd1d000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T05:44:14.330 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:14 vm02 bash[17462]: cluster 2026-03-10T05:44:13.944495+0000 mon.a (mon.0) 69 : cluster [INF] Active manager daemon y restarted 2026-03-10T05:44:14.330 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:14 vm02 bash[17462]: cluster 2026-03-10T05:44:13.945365+0000 mon.a (mon.0) 70 : cluster [INF] Activating manager daemon y 2026-03-10T05:44:14.330 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:14 vm02 bash[17462]: cluster 2026-03-10T05:44:13.947511+0000 mon.a (mon.0) 71 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-10T05:44:14.330 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:13 vm02 bash[17731]: debug 2026-03-10T05:44:13.939+0000 7f3ddcd1d000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T05:44:14.583 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:14 vm02 bash[17731]: [10/Mar/2026:05:44:14] ENGINE Bus STARTING 2026-03-10T05:44:14.583 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:14 vm02 bash[17731]: [10/Mar/2026:05:44:14] ENGINE Serving on https://192.168.123.102:7150 2026-03-10T05:44:14.583 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:14 vm02 bash[17731]: [10/Mar/2026:05:44:14] ENGINE Bus STARTED 2026-03-10T05:44:15.016 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: { 2026-03-10T05:44:15.016 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "mgrmap_epoch": 11, 2026-03-10T05:44:15.016 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: "initialized": true 2026-03-10T05:44:15.016 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: } 2026-03-10T05:44:15.068 INFO:teuthology.orchestra.run.vm02.stderr:mgr epoch 9 is available 2026-03-10T05:44:15.068 INFO:teuthology.orchestra.run.vm02.stderr:Generating a dashboard self-signed certificate... 
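The certificate step that follows is a single dashboard command. A sketch, assuming the dashboard module is already enabled as logged above:

    # Generates and installs a self-signed TLS certificate for the dashboard.
    ceph dashboard create-self-signed-cert
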
2026-03-10T05:44:15.333 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:15 vm02 bash[17462]: cluster 2026-03-10T05:44:13.999676+0000 mon.a (mon.0) 72 : cluster [DBG] mgrmap e10: y(active, starting, since 0.0544085s) 2026-03-10T05:44:15.333 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:15 vm02 bash[17462]: audit 2026-03-10T05:44:14.003464+0000 mon.a (mon.0) 73 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:44:15.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:15 vm02 bash[17462]: audit 2026-03-10T05:44:14.004685+0000 mon.a (mon.0) 74 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T05:44:15.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:15 vm02 bash[17462]: audit 2026-03-10T05:44:14.005526+0000 mon.a (mon.0) 75 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T05:44:15.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:15 vm02 bash[17462]: audit 2026-03-10T05:44:14.005752+0000 mon.a (mon.0) 76 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T05:44:15.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:15 vm02 bash[17462]: audit 2026-03-10T05:44:14.005999+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T05:44:15.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:15 vm02 bash[17462]: cluster 2026-03-10T05:44:14.011832+0000 mon.a (mon.0) 78 : cluster [INF] Manager daemon y is now available 2026-03-10T05:44:15.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:15 vm02 bash[17462]: audit 2026-03-10T05:44:14.031790+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:44:15.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:15 vm02 bash[17462]: audit 2026-03-10T05:44:14.032401+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:44:15.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:15 vm02 bash[17462]: audit 2026-03-10T05:44:14.038330+0000 mon.a (mon.0) 81 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:44:15.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:15 vm02 bash[17462]: audit 2026-03-10T05:44:14.039438+0000 mon.a (mon.0) 82 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:44:15.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:15 vm02 bash[17462]: audit 2026-03-10T05:44:14.445942+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:15.334 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: Self-signed certificate created 2026-03-10T05:44:15.369 INFO:teuthology.orchestra.run.vm02.stderr:Creating initial admin user... 
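Creating the initial admin account, as the harness does next, uses the dashboard access-control CLI; recent releases read the password from a file passed with -i. The password below is the one the bootstrap prints just after this step; the file path is illustrative:

    printf '%s' '9pf5w65gpl' > /tmp/dashboard-pass.txt
    ceph dashboard ac-user-create admin -i /tmp/dashboard-pass.txt administrator
    rm /tmp/dashboard-pass.txt
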
2026-03-10T05:44:15.742 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: {"username": "admin", "password": "$2b$12$gt64uvKn5eCWrP22xTMJwOsscT4qtv918A4HBjHIfEo/qEuP/0H3u", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773121455, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true}
2026-03-10T05:44:15.775 INFO:teuthology.orchestra.run.vm02.stderr:Fetching dashboard port number...
2026-03-10T05:44:15.974 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: 8443
2026-03-10T05:44:16.028 INFO:teuthology.orchestra.run.vm02.stderr:firewalld does not appear to be present
2026-03-10T05:44:16.028 INFO:teuthology.orchestra.run.vm02.stderr:Not possible to open ports <[8443]>. firewalld.service is not available
2026-03-10T05:44:16.029 INFO:teuthology.orchestra.run.vm02.stderr:Ceph Dashboard is now available at:
2026-03-10T05:44:16.029 INFO:teuthology.orchestra.run.vm02.stderr:
2026-03-10T05:44:16.029 INFO:teuthology.orchestra.run.vm02.stderr: URL: https://vm02.local:8443/
2026-03-10T05:44:16.029 INFO:teuthology.orchestra.run.vm02.stderr: User: admin
2026-03-10T05:44:16.029 INFO:teuthology.orchestra.run.vm02.stderr: Password: 9pf5w65gpl
2026-03-10T05:44:16.029 INFO:teuthology.orchestra.run.vm02.stderr:
2026-03-10T05:44:16.029 INFO:teuthology.orchestra.run.vm02.stderr:Enabling autotune for osd_memory_target
2026-03-10T05:44:16.316 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:16 vm02 bash[17462]: cephadm 2026-03-10T05:44:14.331409+0000 mgr.y (mgr.14152) 1 : cephadm [INF] [10/Mar/2026:05:44:14] ENGINE Bus STARTING
2026-03-10T05:44:16.316 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:16 vm02 bash[17462]: cephadm 2026-03-10T05:44:14.442025+0000 mgr.y (mgr.14152) 2 : cephadm [INF] [10/Mar/2026:05:44:14] ENGINE Serving on https://192.168.123.102:7150
2026-03-10T05:44:16.316 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:16 vm02 bash[17462]: cephadm 2026-03-10T05:44:14.442331+0000 mgr.y (mgr.14152) 3 : cephadm [INF] [10/Mar/2026:05:44:14] ENGINE Bus STARTED
2026-03-10T05:44:16.316 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:16 vm02 bash[17462]: cluster 2026-03-10T05:44:15.010831+0000 mon.a (mon.0) 84 : cluster [DBG] mgrmap e11: y(active, since 1.06556s)
2026-03-10T05:44:16.316 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:16 vm02 bash[17462]: audit 2026-03-10T05:44:15.012085+0000 mgr.y (mgr.14152) 4 : audit [DBG] from='client.14156 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-10T05:44:16.316 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:16 vm02 bash[17462]: audit 2026-03-10T05:44:15.016265+0000 mgr.y (mgr.14152) 5 : audit [DBG] from='client.14156 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-10T05:44:16.316 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:16 vm02 bash[17462]: audit 2026-03-10T05:44:15.332429+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:44:16.317 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:16 vm02 bash[17462]: audit 2026-03-10T05:44:15.334842+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:44:16.317 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:16 vm02 bash[17462]: audit 2026-03-10T05:44:15.740762+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:44:16.317 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:16 vm02 bash[17462]: audit 2026-03-10T05:44:15.974080+0000 mon.a (mon.0) 88 : audit [DBG] from='client.? 192.168.123.102:0/2967378037' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch
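With the self-signed certificate and the admin account in place, the dashboard is serving on port 8443 (the firewalld messages are harmless here: no firewall runs on these VMs, so there is nothing to open). A quick reachability check, as a sketch; -k skips TLS verification because the certificate is self-signed:

    # Expect an HTTP status code back from the dashboard front page.
    curl -sk -o /dev/null -w '%{http_code}\n' https://vm02.local:8443/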
2026-03-10T05:44:16.618 INFO:teuthology.orchestra.run.vm02.stderr:/usr/bin/ceph: set mgr/dashboard/cluster/status
2026-03-10T05:44:16.653 INFO:teuthology.orchestra.run.vm02.stderr:You can access the Ceph CLI with:
2026-03-10T05:44:16.653 INFO:teuthology.orchestra.run.vm02.stderr:
2026-03-10T05:44:16.653 INFO:teuthology.orchestra.run.vm02.stderr: sudo /home/ubuntu/cephtest/cephadm shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
2026-03-10T05:44:16.654 INFO:teuthology.orchestra.run.vm02.stderr:
2026-03-10T05:44:16.654 INFO:teuthology.orchestra.run.vm02.stderr:Please consider enabling telemetry to help improve Ceph:
2026-03-10T05:44:16.654 INFO:teuthology.orchestra.run.vm02.stderr:
2026-03-10T05:44:16.654 INFO:teuthology.orchestra.run.vm02.stderr: ceph telemetry on
2026-03-10T05:44:16.654 INFO:teuthology.orchestra.run.vm02.stderr:
2026-03-10T05:44:16.654 INFO:teuthology.orchestra.run.vm02.stderr:For more information see:
2026-03-10T05:44:16.654 INFO:teuthology.orchestra.run.vm02.stderr:
2026-03-10T05:44:16.654 INFO:teuthology.orchestra.run.vm02.stderr: https://docs.ceph.com/docs/master/mgr/telemetry/
2026-03-10T05:44:16.654 INFO:teuthology.orchestra.run.vm02.stderr:
2026-03-10T05:44:16.654 INFO:teuthology.orchestra.run.vm02.stderr:Bootstrap complete.
2026-03-10T05:44:16.663 INFO:tasks.cephadm:Fetching config...
2026-03-10T05:44:16.664 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-10T05:44:16.664 DEBUG:teuthology.orchestra.run.vm02:> dd if=/etc/ceph/ceph.conf of=/dev/stdout
2026-03-10T05:44:16.666 INFO:tasks.cephadm:Fetching client.admin keyring...
2026-03-10T05:44:16.666 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-10T05:44:16.666 DEBUG:teuthology.orchestra.run.vm02:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout
2026-03-10T05:44:16.711 INFO:tasks.cephadm:Fetching mon keyring...
2026-03-10T05:44:16.711 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-10T05:44:16.711 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/mon.a/keyring of=/dev/stdout
2026-03-10T05:44:16.759 INFO:tasks.cephadm:Fetching pub ssh key...
2026-03-10T05:44:16.759 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-10T05:44:16.759 DEBUG:teuthology.orchestra.run.vm02:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout
2026-03-10T05:44:16.803 INFO:tasks.cephadm:Installing pub ssh key for root users...
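The banner above is the canonical entry point into this cluster's CLI: cephadm starts a container with the admin keyring mounted, so plain ceph commands work inside it. For example, using the fsid printed by bootstrap:

    sudo /home/ubuntu/cephtest/cephadm shell \
        --fsid 107483ae-1c44-11f1-b530-c1172cd6122a \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
    # ...then, inside the containerized shell:
    ceph -s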
2026-03-10T05:44:16.803 DEBUG:teuthology.orchestra.run.vm02:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhoGVKGmHR1/pl1LL6b4N5J3bxsJRd2mlCKPnD/BqsLyVmZmQmN4NwbU0hDNkhXRnpgWau71Aw/1+vbFuOcxid4vnDGukwRWilpRr0BurwJMzb6KBLB1AsMCpMv5WqaEF9MFU1jqwmHFmwch51xyiTp6QNlQx2hJKmgK3gCwClOfNhk4Lx52y6Oqw9geUBUMRVLjdIQx1xArqZhDeSnMqTSinld6Nff3tbVxoHJ/vrXsigc6RZeauvtJ1bsrC39l6/OlrpQS8ZBl33qv59Hozg/f+h/sqFE0NGBLxb6xYXV1gSdBZyqYsnQLPXQT9voL0d3CLx+xgra5ET246R5wiCy5Wbckgy9EP/dt+Ud671wdX715Eslwb/2K/aR7/t4bAAYOZGs+oA2wX3g/1NI0a2oElR4P4jXlt6D5+ZXMzeDkz1kR3uZV0bnFy8cALGz4boOyvbCU+RPadynCnXo3pI3AiCWVf7nUrIi4+7sNTF9t/CWPexFyL0igNgMjkvaFM= ceph-107483ae-1c44-11f1-b530-c1172cd6122a' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T05:44:16.861 INFO:teuthology.orchestra.run.vm02.stdout:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhoGVKGmHR1/pl1LL6b4N5J3bxsJRd2mlCKPnD/BqsLyVmZmQmN4NwbU0hDNkhXRnpgWau71Aw/1+vbFuOcxid4vnDGukwRWilpRr0BurwJMzb6KBLB1AsMCpMv5WqaEF9MFU1jqwmHFmwch51xyiTp6QNlQx2hJKmgK3gCwClOfNhk4Lx52y6Oqw9geUBUMRVLjdIQx1xArqZhDeSnMqTSinld6Nff3tbVxoHJ/vrXsigc6RZeauvtJ1bsrC39l6/OlrpQS8ZBl33qv59Hozg/f+h/sqFE0NGBLxb6xYXV1gSdBZyqYsnQLPXQT9voL0d3CLx+xgra5ET246R5wiCy5Wbckgy9EP/dt+Ud671wdX715Eslwb/2K/aR7/t4bAAYOZGs+oA2wX3g/1NI0a2oElR4P4jXlt6D5+ZXMzeDkz1kR3uZV0bnFy8cALGz4boOyvbCU+RPadynCnXo3pI3AiCWVf7nUrIi4+7sNTF9t/CWPexFyL0igNgMjkvaFM= ceph-107483ae-1c44-11f1-b530-c1172cd6122a 2026-03-10T05:44:16.868 DEBUG:teuthology.orchestra.run.vm05:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhoGVKGmHR1/pl1LL6b4N5J3bxsJRd2mlCKPnD/BqsLyVmZmQmN4NwbU0hDNkhXRnpgWau71Aw/1+vbFuOcxid4vnDGukwRWilpRr0BurwJMzb6KBLB1AsMCpMv5WqaEF9MFU1jqwmHFmwch51xyiTp6QNlQx2hJKmgK3gCwClOfNhk4Lx52y6Oqw9geUBUMRVLjdIQx1xArqZhDeSnMqTSinld6Nff3tbVxoHJ/vrXsigc6RZeauvtJ1bsrC39l6/OlrpQS8ZBl33qv59Hozg/f+h/sqFE0NGBLxb6xYXV1gSdBZyqYsnQLPXQT9voL0d3CLx+xgra5ET246R5wiCy5Wbckgy9EP/dt+Ud671wdX715Eslwb/2K/aR7/t4bAAYOZGs+oA2wX3g/1NI0a2oElR4P4jXlt6D5+ZXMzeDkz1kR3uZV0bnFy8cALGz4boOyvbCU+RPadynCnXo3pI3AiCWVf7nUrIi4+7sNTF9t/CWPexFyL0igNgMjkvaFM= ceph-107483ae-1c44-11f1-b530-c1172cd6122a' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T05:44:16.879 INFO:teuthology.orchestra.run.vm05.stdout:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhoGVKGmHR1/pl1LL6b4N5J3bxsJRd2mlCKPnD/BqsLyVmZmQmN4NwbU0hDNkhXRnpgWau71Aw/1+vbFuOcxid4vnDGukwRWilpRr0BurwJMzb6KBLB1AsMCpMv5WqaEF9MFU1jqwmHFmwch51xyiTp6QNlQx2hJKmgK3gCwClOfNhk4Lx52y6Oqw9geUBUMRVLjdIQx1xArqZhDeSnMqTSinld6Nff3tbVxoHJ/vrXsigc6RZeauvtJ1bsrC39l6/OlrpQS8ZBl33qv59Hozg/f+h/sqFE0NGBLxb6xYXV1gSdBZyqYsnQLPXQT9voL0d3CLx+xgra5ET246R5wiCy5Wbckgy9EP/dt+Ud671wdX715Eslwb/2K/aR7/t4bAAYOZGs+oA2wX3g/1NI0a2oElR4P4jXlt6D5+ZXMzeDkz1kR3uZV0bnFy8cALGz4boOyvbCU+RPadynCnXo3pI3AiCWVf7nUrIi4+7sNTF9t/CWPexFyL0igNgMjkvaFM= ceph-107483ae-1c44-11f1-b530-c1172cd6122a 2026-03-10T05:44:16.884 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-10T05:44:17.197 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:17 vm02 bash[17462]: audit 2026-03-10T05:44:15.299941+0000 mgr.y (mgr.14152) 6 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:44:17.197 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:17 vm02 bash[17462]: audit 2026-03-10T05:44:15.590713+0000 mgr.y (mgr.14152) 7 : audit [DBG] from='client.14166 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:44:17.197 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:17 vm02 bash[17462]: audit 2026-03-10T05:44:16.616352+0000 mon.a (mon.0) 89 : audit [INF] from='client.? 192.168.123.102:0/2174342322' entity='client.admin' 2026-03-10T05:44:17.197 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:17 vm02 bash[17462]: cluster 2026-03-10T05:44:16.743464+0000 mon.a (mon.0) 90 : cluster [DBG] mgrmap e12: y(active, since 2s) 2026-03-10T05:44:17.197 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:17 vm02 bash[17462]: audit 2026-03-10T05:44:16.982631+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:17.435 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-10T05:44:17.435 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-10T05:44:17.833 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm05 2026-03-10T05:44:17.833 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T05:44:17.833 DEBUG:teuthology.orchestra.run.vm05:> dd of=/etc/ceph/ceph.conf 2026-03-10T05:44:17.836 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T05:44:17.836 DEBUG:teuthology.orchestra.run.vm05:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T05:44:17.879 INFO:tasks.cephadm:Adding host vm05 to orchestrator... 2026-03-10T05:44:17.880 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph orch host add vm05 2026-03-10T05:44:18.333 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:18 vm02 bash[17462]: audit 2026-03-10T05:44:17.247070+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:18.333 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:18 vm02 bash[17462]: audit 2026-03-10T05:44:17.390448+0000 mon.a (mon.0) 93 : audit [INF] from='client.? 
192.168.123.102:0/3004166379' entity='client.admin' 2026-03-10T05:44:18.333 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:18 vm02 bash[17462]: audit 2026-03-10T05:44:17.787926+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:18.755 INFO:teuthology.orchestra.run.vm02.stdout:Added host 'vm05' with addr '192.168.123.105' 2026-03-10T05:44:18.801 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph orch host ls --format=json 2026-03-10T05:44:19.154 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T05:44:19.154 INFO:teuthology.orchestra.run.vm02.stdout:[{"addr": "192.168.123.102", "hostname": "vm02", "labels": [], "status": ""}, {"addr": "192.168.123.105", "hostname": "vm05", "labels": [], "status": ""}] 2026-03-10T05:44:19.199 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-10T05:44:19.200 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph osd crush tunables default 2026-03-10T05:44:19.424 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:19 vm02 bash[17462]: audit 2026-03-10T05:44:17.785003+0000 mgr.y (mgr.14152) 8 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:44:19.424 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:19 vm02 bash[17462]: audit 2026-03-10T05:44:18.240551+0000 mgr.y (mgr.14152) 9 : audit [DBG] from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm05", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:44:19.424 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:19 vm02 bash[17462]: audit 2026-03-10T05:44:18.754961+0000 mon.a (mon.0) 95 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:20.254 INFO:teuthology.orchestra.run.vm02.stderr:adjusted tunables profile to default 2026-03-10T05:44:20.320 INFO:tasks.cephadm:Adding mon.a on vm02 2026-03-10T05:44:20.321 INFO:tasks.cephadm:Adding mon.c on vm02 2026-03-10T05:44:20.321 INFO:tasks.cephadm:Adding mon.b on vm05 2026-03-10T05:44:20.321 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph orch apply mon '3;vm02:192.168.123.102=a;vm02:[v2:192.168.123.102:3301,v1:192.168.123.102:6790]=c;vm05:192.168.123.105=b' 2026-03-10T05:44:20.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:20 vm02 bash[17462]: cephadm 2026-03-10T05:44:18.538845+0000 mgr.y (mgr.14152) 10 : cephadm [INF] Deploying cephadm binary to vm05 2026-03-10T05:44:20.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:20 vm02 bash[17462]: cephadm 2026-03-10T05:44:18.755236+0000 mgr.y (mgr.14152) 11 : cephadm [INF] Added host vm05 2026-03-10T05:44:20.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:20 vm02 bash[17462]: audit 2026-03-10T05:44:19.154477+0000 mgr.y (mgr.14152) 12 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: 
dispatch
2026-03-10T05:44:20.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:20 vm02 bash[17462]: audit 2026-03-10T05:44:19.583223+0000 mon.a (mon.0) 96 : audit [INF] from='client.? 192.168.123.102:0/3993713218' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch
2026-03-10T05:44:20.732 INFO:teuthology.orchestra.run.vm05.stdout:Scheduled mon update...
2026-03-10T05:44:20.779 DEBUG:teuthology.orchestra.run.vm02:mon.c> sudo journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mon.c.service
2026-03-10T05:44:20.780 DEBUG:teuthology.orchestra.run.vm05:mon.b> sudo journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mon.b.service
2026-03-10T05:44:20.780 INFO:tasks.cephadm:Waiting for 3 mons in monmap...
2026-03-10T05:44:20.780 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph mon dump -f json
2026-03-10T05:44:21.234 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T05:44:21.234 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":1,"fsid":"107483ae-1c44-11f1-b530-c1172cd6122a","modified":"2026-03-10T05:43:50.866640Z","created":"2026-03-10T05:43:50.866640Z","min_mon_release":17,"min_mon_release_name":"quincy","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-10T05:44:21.237 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 1
2026-03-10T05:44:21.544 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:21 vm02 bash[17731]: debug 2026-03-10T05:44:21.323+0000 7f3d96f8d700 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec ServiceSpec.from_json(yaml.safe_load('''service_type: mon
2026-03-10T05:44:21.544 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:21 vm02 bash[17731]: service_name: mon
2026-03-10T05:44:21.544 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:21 vm02 bash[17731]: placement:
2026-03-10T05:44:21.544 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:21 vm02 bash[17731]: count: 3
2026-03-10T05:44:21.544 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:21 vm02 bash[17731]: hosts:
2026-03-10T05:44:21.544 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:21 vm02 bash[17731]: - vm02:192.168.123.102=a
2026-03-10T05:44:21.544 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:21 vm02 bash[17731]: - vm02:[v2:192.168.123.102:3301,v1:192.168.123.102:6790]=c
2026-03-10T05:44:21.544 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:21 vm02 bash[17731]: - vm05:192.168.123.105=b
2026-03-10T05:44:21.544 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:21 vm02 bash[17731]: ''')): Cannot place on vm05: Unknown hosts
2026-03-10T05:44:21.544 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:21 vm02 bash[17462]: audit 2026-03-10T05:44:20.251316+0000 mon.a (mon.0) 97 : audit [INF] from='client.?
192.168.123.102:0/3993713218' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T05:44:21.544 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:21 vm02 bash[17462]: cluster 2026-03-10T05:44:20.251415+0000 mon.a (mon.0) 98 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T05:44:21.544 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:21 vm02 bash[17462]: cluster 2026-03-10T05:44:20.265542+0000 mon.a (mon.0) 99 : cluster [DBG] mgrmap e13: y(active, since 6s) 2026-03-10T05:44:21.544 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:21 vm02 bash[17462]: audit 2026-03-10T05:44:20.731834+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:21.544 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:21 vm02 bash[17462]: audit 2026-03-10T05:44:21.234740+0000 mon.a (mon.0) 101 : audit [DBG] from='client.? 192.168.123.105:0/476808107' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T05:44:22.279 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 2026-03-10T05:44:22.279 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph mon dump -f json 2026-03-10T05:44:22.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:22 vm02 bash[17462]: audit 2026-03-10T05:44:20.727358+0000 mgr.y (mgr.14152) 13 : audit [DBG] from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm02:192.168.123.102=a;vm02:[v2:192.168.123.102:3301,v1:192.168.123.102:6790]=c;vm05:192.168.123.105=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:44:22.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:22 vm02 bash[17462]: cephadm 2026-03-10T05:44:20.728477+0000 mgr.y (mgr.14152) 14 : cephadm [INF] Saving service mon spec with placement vm02:192.168.123.102=a;vm02:[v2:192.168.123.102:3301,v1:192.168.123.102:6790]=c;vm05:192.168.123.105=b;count:3 2026-03-10T05:44:22.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:22 vm02 bash[17462]: audit 2026-03-10T05:44:21.323867+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:22.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:22 vm02 bash[17462]: audit 2026-03-10T05:44:21.324336+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:44:22.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:22 vm02 bash[17462]: audit 2026-03-10T05:44:21.327105+0000 mon.a (mon.0) 104 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:22.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:22 vm02 bash[17462]: audit 2026-03-10T05:44:21.328187+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:44:22.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:22 vm02 bash[17462]: audit 2026-03-10T05:44:21.329413+0000 mon.a (mon.0) 106 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:44:22.584 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:22 vm02 bash[17462]: audit 2026-03-10T05:44:21.330105+0000 mon.a (mon.0) 107 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:22.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:22 vm02 bash[17462]: audit 2026-03-10T05:44:21.549674+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:22.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:22 vm02 bash[17462]: audit 2026-03-10T05:44:21.550402+0000 mon.a (mon.0) 109 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:44:22.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:22 vm02 bash[17462]: audit 2026-03-10T05:44:21.551185+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:22.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:22 vm02 bash[17462]: audit 2026-03-10T05:44:21.551771+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:44:22.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:22 vm02 bash[17462]: audit 2026-03-10T05:44:21.652588+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:22.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:22 vm02 bash[17462]: audit 2026-03-10T05:44:21.670467+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:22.772 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T05:44:22.772 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":1,"fsid":"107483ae-1c44-11f1-b530-c1172cd6122a","modified":"2026-03-10T05:43:50.866640Z","created":"2026-03-10T05:43:50.866640Z","min_mon_release":17,"min_mon_release_name":"quincy","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T05:44:22.774 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 1 2026-03-10T05:44:23.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:23 vm02 bash[17462]: cephadm 2026-03-10T05:44:21.327767+0000 mgr.y (mgr.14152) 15 : cephadm [ERR] Failed to apply mon spec ServiceSpec.from_json(yaml.safe_load('''service_type: mon 2026-03-10T05:44:23.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:23 vm02 bash[17462]: service_name: mon 2026-03-10T05:44:23.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:23 vm02 bash[17462]: placement: 2026-03-10T05:44:23.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:23 vm02 bash[17462]: count: 3 2026-03-10T05:44:23.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:23 vm02 bash[17462]: hosts: 2026-03-10T05:44:23.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:23 vm02 bash[17462]: - 
vm02:192.168.123.102=a 2026-03-10T05:44:23.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:23 vm02 bash[17462]: - vm02:[v2:192.168.123.102:3301,v1:192.168.123.102:6790]=c 2026-03-10T05:44:23.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:23 vm02 bash[17462]: - vm05:192.168.123.105=b 2026-03-10T05:44:23.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:23 vm02 bash[17462]: ''')): Cannot place on vm05: Unknown hosts 2026-03-10T05:44:23.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:23 vm02 bash[17462]: cephadm 2026-03-10T05:44:21.327861+0000 mgr.y (mgr.14152) 16 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-10T05:44:23.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:23 vm02 bash[17462]: cephadm 2026-03-10T05:44:21.330693+0000 mgr.y (mgr.14152) 17 : cephadm [INF] Reconfiguring daemon mon.a on vm02 2026-03-10T05:44:23.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:23 vm02 bash[17462]: cephadm 2026-03-10T05:44:21.552529+0000 mgr.y (mgr.14152) 18 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf 2026-03-10T05:44:23.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:23 vm02 bash[17462]: cephadm 2026-03-10T05:44:21.601373+0000 mgr.y (mgr.14152) 19 : cephadm [INF] Updating vm02:/etc/ceph/ceph.client.admin.keyring 2026-03-10T05:44:23.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:23 vm02 bash[17462]: audit 2026-03-10T05:44:22.273354+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:23.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:23 vm02 bash[17462]: audit 2026-03-10T05:44:22.578059+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:23.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:23 vm02 bash[17462]: audit 2026-03-10T05:44:22.772217+0000 mon.a (mon.0) 116 : audit [DBG] from='client.? 192.168.123.105:0/2045936000' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T05:44:23.827 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 2026-03-10T05:44:23.828 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph mon dump -f json 2026-03-10T05:44:24.262 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T05:44:24.262 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":1,"fsid":"107483ae-1c44-11f1-b530-c1172cd6122a","modified":"2026-03-10T05:43:50.866640Z","created":"2026-03-10T05:43:50.866640Z","min_mon_release":17,"min_mon_release_name":"quincy","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T05:44:24.264 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 1 2026-03-10T05:44:24.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:24 vm02 bash[17462]: audit 2026-03-10T05:44:24.262093+0000 mon.a (mon.0) 117 : audit [DBG] from='client.? 
192.168.123.105:0/2440387890' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T05:44:25.307 INFO:tasks.cephadm:Waiting for 3 mons in monmap...
2026-03-10T05:44:25.308 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph mon dump -f json
2026-03-10T05:44:25.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:25 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:44:25.334 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:25 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:44:25.833 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:25 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:44:25.834 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:25 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
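The "Cannot place on vm05: Unknown hosts" error above appears to be transient: the mon spec was applied moments after ceph orch host add vm05, before the orchestrator's host inventory had refreshed, so the serve loop fails once and retries on its next pass while the task keeps polling "Waiting for 3 mons in monmap". A sketch of the same wait, assuming jq is available on the host:

    # Block until the monmap lists three monitors (map membership, not
    # quorum) - which is exactly what tasks.cephadm is polling for here.
    until sudo /home/ubuntu/cephtest/cephadm shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a \
            -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring -- \
            ceph mon dump -f json 2>/dev/null | jq -e '.mons | length == 3' >/dev/null; do
        sleep 5
    done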
2026-03-10T05:44:25.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:25 vm02 bash[22526]: debug 2026-03-10T05:44:25.543+0000 7f9c44e89700 1 mon.c@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-10T05:44:26.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:26 vm05 bash[17864]: debug 2026-03-10T05:44:26.702+0000 7f7c74033700 1 mon.b@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:30 vm02 bash[17462]: cephadm 2026-03-10T05:44:25.438899+0000 mgr.y (mgr.14152) 21 : cephadm [INF] Deploying daemon mon.b on vm05 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:30 vm02 bash[17462]: cluster 2026-03-10T05:44:25.552705+0000 mon.a (mon.0) 128 : cluster [INF] mon.a calling monitor election 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:30 vm02 bash[17462]: audit 2026-03-10T05:44:25.554723+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:30 vm02 bash[17462]: audit 2026-03-10T05:44:25.555050+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:30 vm02 bash[17462]: audit 2026-03-10T05:44:26.549043+0000 mon.a (mon.0) 131 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:30 vm02 bash[17462]: audit 2026-03-10T05:44:26.715273+0000 mon.a (mon.0) 132 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:30 vm02 bash[17462]: audit 2026-03-10T05:44:27.549229+0000 mon.a (mon.0) 133 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:30 vm02 bash[17462]: cluster 2026-03-10T05:44:27.551561+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:30 vm02 bash[17462]: audit 2026-03-10T05:44:27.715753+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:30 vm02 bash[17462]: audit 2026-03-10T05:44:28.549280+0000 mon.a (mon.0) 135 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:30 vm02 bash[17462]: audit 2026-03-10T05:44:28.715616+0000 mon.a (mon.0) 136 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:30 vm02 bash[17462]: audit 2026-03-10T05:44:29.549299+0000 mon.a (mon.0) 137 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": 
"mon metadata", "id": "c"}]: dispatch 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:30 vm02 bash[17462]: audit 2026-03-10T05:44:29.715836+0000 mon.a (mon.0) 138 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:30 vm02 bash[17462]: audit 2026-03-10T05:44:30.549403+0000 mon.a (mon.0) 139 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:30 vm02 bash[17462]: cluster 2026-03-10T05:44:30.557502+0000 mon.a (mon.0) 140 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:30 vm02 bash[17462]: cluster 2026-03-10T05:44:30.561447+0000 mon.a (mon.0) 141 : cluster [DBG] monmap e2: 2 mons at {a=[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0],c=[v2:192.168.123.102:3301/0,v1:192.168.123.102:6790/0]} 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:30 vm02 bash[17462]: cluster 2026-03-10T05:44:30.561532+0000 mon.a (mon.0) 142 : cluster [DBG] fsmap 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:30 vm02 bash[17462]: cluster 2026-03-10T05:44:30.561600+0000 mon.a (mon.0) 143 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:30 vm02 bash[17462]: cluster 2026-03-10T05:44:30.561855+0000 mon.a (mon.0) 144 : cluster [DBG] mgrmap e13: y(active, since 16s) 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:30 vm02 bash[17462]: cluster 2026-03-10T05:44:30.566426+0000 mon.a (mon.0) 145 : cluster [INF] overall HEALTH_OK 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:30 vm02 bash[17462]: audit 2026-03-10T05:44:30.569139+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:30 vm02 bash[17462]: audit 2026-03-10T05:44:30.570226+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:30 vm02 bash[17462]: audit 2026-03-10T05:44:30.571490+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:30 vm02 bash[17462]: audit 2026-03-10T05:44:30.571869+0000 mon.a (mon.0) 149 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:30 vm02 bash[22526]: cephadm 2026-03-10T05:44:25.438899+0000 mgr.y (mgr.14152) 21 : cephadm [INF] Deploying daemon mon.b on vm05 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:30 vm02 bash[22526]: cluster 2026-03-10T05:44:25.552705+0000 mon.a (mon.0) 128 : cluster [INF] mon.a calling monitor election 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:30 vm02 bash[22526]: audit 2026-03-10T05:44:25.554723+0000 mon.a (mon.0) 129 : 
audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:30 vm02 bash[22526]: audit 2026-03-10T05:44:25.555050+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:30 vm02 bash[22526]: audit 2026-03-10T05:44:26.549043+0000 mon.a (mon.0) 131 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:30 vm02 bash[22526]: audit 2026-03-10T05:44:26.715273+0000 mon.a (mon.0) 132 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:30 vm02 bash[22526]: audit 2026-03-10T05:44:27.549229+0000 mon.a (mon.0) 133 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:30 vm02 bash[22526]: cluster 2026-03-10T05:44:27.551561+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:30 vm02 bash[22526]: audit 2026-03-10T05:44:27.715753+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:30 vm02 bash[22526]: audit 2026-03-10T05:44:28.549280+0000 mon.a (mon.0) 135 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:30 vm02 bash[22526]: audit 2026-03-10T05:44:28.715616+0000 mon.a (mon.0) 136 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:44:30.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:30 vm02 bash[22526]: audit 2026-03-10T05:44:29.549299+0000 mon.a (mon.0) 137 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:44:30.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:30 vm02 bash[22526]: audit 2026-03-10T05:44:29.715836+0000 mon.a (mon.0) 138 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:44:30.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:30 vm02 bash[22526]: audit 2026-03-10T05:44:30.549403+0000 mon.a (mon.0) 139 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:44:30.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:30 vm02 bash[22526]: cluster 2026-03-10T05:44:30.557502+0000 mon.a (mon.0) 140 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-10T05:44:30.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:30 vm02 bash[22526]: cluster 2026-03-10T05:44:30.561447+0000 mon.a (mon.0) 141 : cluster [DBG] monmap e2: 2 mons at 
{a=[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0],c=[v2:192.168.123.102:3301/0,v1:192.168.123.102:6790/0]} 2026-03-10T05:44:30.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:30 vm02 bash[22526]: cluster 2026-03-10T05:44:30.561532+0000 mon.a (mon.0) 142 : cluster [DBG] fsmap 2026-03-10T05:44:30.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:30 vm02 bash[22526]: cluster 2026-03-10T05:44:30.561600+0000 mon.a (mon.0) 143 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T05:44:30.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:30 vm02 bash[22526]: cluster 2026-03-10T05:44:30.561855+0000 mon.a (mon.0) 144 : cluster [DBG] mgrmap e13: y(active, since 16s) 2026-03-10T05:44:30.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:30 vm02 bash[22526]: cluster 2026-03-10T05:44:30.566426+0000 mon.a (mon.0) 145 : cluster [INF] overall HEALTH_OK 2026-03-10T05:44:30.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:30 vm02 bash[22526]: audit 2026-03-10T05:44:30.569139+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:30.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:30 vm02 bash[22526]: audit 2026-03-10T05:44:30.570226+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:44:30.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:30 vm02 bash[22526]: audit 2026-03-10T05:44:30.571490+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:30.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:30 vm02 bash[22526]: audit 2026-03-10T05:44:30.571869+0000 mon.a (mon.0) 149 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:44:35.993 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:35 vm02 bash[17462]: audit 2026-03-10T05:44:30.720243+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:44:35.993 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:35 vm02 bash[17462]: audit 2026-03-10T05:44:30.720297+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:44:35.993 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:35 vm02 bash[17462]: audit 2026-03-10T05:44:30.720330+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:44:35.993 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:35 vm02 bash[17462]: cluster 2026-03-10T05:44:30.720421+0000 mon.a (mon.0) 154 : cluster [INF] mon.a calling monitor election 2026-03-10T05:44:35.993 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:35 vm02 bash[17462]: cluster 2026-03-10T05:44:30.722731+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-10T05:44:35.993 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:35 vm02 bash[17462]: audit 2026-03-10T05:44:31.716035+0000 mon.a (mon.0) 155 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:44:35.993 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:35 vm02 bash[17462]: audit 2026-03-10T05:44:32.716227+0000 mon.a (mon.0) 156 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:44:35.993 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:35 vm02 bash[17462]: audit 2026-03-10T05:44:33.716381+0000 mon.a (mon.0) 157 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:44:35.993 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:35 vm02 bash[17462]: audit 2026-03-10T05:44:34.716301+0000 mon.a (mon.0) 158 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:44:35.993 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:35 vm02 bash[17462]: cluster 2026-03-10T05:44:34.752386+0000 mgr.y (mgr.14152) 22 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:35.993 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:35 vm02 bash[17462]: audit 2026-03-10T05:44:35.716593+0000 mon.a (mon.0) 159 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:44:35.993 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:35 vm02 bash[17462]: cluster 2026-03-10T05:44:35.721636+0000 mon.a (mon.0) 160 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-10T05:44:35.993 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:35 vm02 bash[17462]: cluster 2026-03-10T05:44:35.724418+0000 mon.a (mon.0) 161 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0],b=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],c=[v2:192.168.123.102:3301/0,v1:192.168.123.102:6790/0]} 2026-03-10T05:44:35.993 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:35 vm02 bash[17462]: cluster 2026-03-10T05:44:35.724499+0000 mon.a (mon.0) 162 : cluster [DBG] fsmap 2026-03-10T05:44:35.993 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:35 vm02 bash[17462]: cluster 2026-03-10T05:44:35.724571+0000 mon.a (mon.0) 163 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T05:44:35.993 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:35 vm02 bash[17462]: cluster 2026-03-10T05:44:35.724860+0000 mon.a (mon.0) 164 : cluster [DBG] mgrmap e13: y(active, since 21s) 2026-03-10T05:44:35.993 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:35 vm02 bash[17462]: cluster 2026-03-10T05:44:35.729593+0000 mon.a (mon.0) 165 : cluster [INF] overall HEALTH_OK 2026-03-10T05:44:35.993 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:35 vm02 bash[17462]: audit 2026-03-10T05:44:35.732099+0000 mon.a (mon.0) 166 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:35.993 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:35 vm02 bash[17462]: audit 2026-03-10T05:44:35.735666+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:35.993 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:35 vm02 bash[17462]: audit 2026-03-10T05:44:35.738349+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:35.993 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:35 vm02 bash[22526]: audit 2026-03-10T05:44:30.720243+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14152 
192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:44:35.994 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:35 vm02 bash[22526]: audit 2026-03-10T05:44:30.720297+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:44:35.994 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:35 vm02 bash[22526]: audit 2026-03-10T05:44:30.720330+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:44:35.994 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:35 vm02 bash[22526]: cluster 2026-03-10T05:44:30.720421+0000 mon.a (mon.0) 154 : cluster [INF] mon.a calling monitor election 2026-03-10T05:44:35.994 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:35 vm02 bash[22526]: cluster 2026-03-10T05:44:30.722731+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-10T05:44:35.994 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:35 vm02 bash[22526]: audit 2026-03-10T05:44:31.716035+0000 mon.a (mon.0) 155 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:44:35.994 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:35 vm02 bash[22526]: audit 2026-03-10T05:44:32.716227+0000 mon.a (mon.0) 156 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:44:35.994 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:35 vm02 bash[22526]: audit 2026-03-10T05:44:33.716381+0000 mon.a (mon.0) 157 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:44:35.994 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:35 vm02 bash[22526]: audit 2026-03-10T05:44:34.716301+0000 mon.a (mon.0) 158 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:44:35.994 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:35 vm02 bash[22526]: cluster 2026-03-10T05:44:34.752386+0000 mgr.y (mgr.14152) 22 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:35.994 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:35 vm02 bash[22526]: audit 2026-03-10T05:44:35.716593+0000 mon.a (mon.0) 159 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:44:35.994 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:35 vm02 bash[22526]: cluster 2026-03-10T05:44:35.721636+0000 mon.a (mon.0) 160 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-10T05:44:35.994 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:35 vm02 bash[22526]: cluster 2026-03-10T05:44:35.724418+0000 mon.a (mon.0) 161 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0],b=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],c=[v2:192.168.123.102:3301/0,v1:192.168.123.102:6790/0]} 2026-03-10T05:44:35.994 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:35 vm02 bash[22526]: cluster 2026-03-10T05:44:35.724499+0000 mon.a (mon.0) 162 : cluster [DBG] fsmap 2026-03-10T05:44:35.994 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:35 vm02 bash[22526]: 
cluster 2026-03-10T05:44:35.724571+0000 mon.a (mon.0) 163 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T05:44:35.994 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:35 vm02 bash[22526]: cluster 2026-03-10T05:44:35.724860+0000 mon.a (mon.0) 164 : cluster [DBG] mgrmap e13: y(active, since 21s) 2026-03-10T05:44:35.994 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:35 vm02 bash[22526]: cluster 2026-03-10T05:44:35.729593+0000 mon.a (mon.0) 165 : cluster [INF] overall HEALTH_OK 2026-03-10T05:44:35.994 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:35 vm02 bash[22526]: audit 2026-03-10T05:44:35.732099+0000 mon.a (mon.0) 166 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:35.994 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:35 vm02 bash[22526]: audit 2026-03-10T05:44:35.735666+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:35.994 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:35 vm02 bash[22526]: audit 2026-03-10T05:44:35.738349+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:36.193 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T05:44:36.193 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":3,"fsid":"107483ae-1c44-11f1-b530-c1172cd6122a","modified":"2026-03-10T05:44:30.716574Z","created":"2026-03-10T05:43:50.866640Z","min_mon_release":17,"min_mon_release_name":"quincy","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3300","nonce":0},{"type":"v1","addr":"192.168.123.102:6789","nonce":0}]},"addr":"192.168.123.102:6789/0","public_addr":"192.168.123.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:3301","nonce":0},{"type":"v1","addr":"192.168.123.102:6790","nonce":0}]},"addr":"192.168.123.102:6790/0","public_addr":"192.168.123.102:6790/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]} 2026-03-10T05:44:36.195 INFO:teuthology.orchestra.run.vm05.stderr:dumped monmap epoch 3 2026-03-10T05:44:36.260 INFO:tasks.cephadm:Generating final ceph.conf file... 
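The monmap dump above (epoch 3) now lists all three mons, while quorum is still [0,1] because mon.b has not yet joined quorum at that instant; the task only waits for monmap membership, so it proceeds. A sketch for pulling the interesting fields out of that JSON:

    # Epoch, quorum ranks, and the name/rank of each monitor in the map.
    sudo /home/ubuntu/cephtest/cephadm shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring -- \
        ceph mon dump -f json 2>/dev/null | jq '{epoch, quorum, mons: [.mons[] | {rank, name}]}'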
2026-03-10T05:44:36.260 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph config generate-minimal-conf
2026-03-10T05:44:36.668 INFO:teuthology.orchestra.run.vm02.stdout:# minimal ceph.conf for 107483ae-1c44-11f1-b530-c1172cd6122a
2026-03-10T05:44:36.668 INFO:teuthology.orchestra.run.vm02.stdout:[global]
2026-03-10T05:44:36.668 INFO:teuthology.orchestra.run.vm02.stdout: fsid = 107483ae-1c44-11f1-b530-c1172cd6122a
2026-03-10T05:44:36.668 INFO:teuthology.orchestra.run.vm02.stdout: mon_host = [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] [v2:192.168.123.102:3301/0,v1:192.168.123.102:6790/0]
2026-03-10T05:44:36.714 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring...
2026-03-10T05:44:36.714 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-10T05:44:36.714 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/ceph/ceph.conf
2026-03-10T05:44:36.722 DEBUG:teuthology.orchestra.run.vm02:> set -ex
2026-03-10T05:44:36.722 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-10T05:44:36.772 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-10T05:44:36.772 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/ceph.conf
2026-03-10T05:44:36.778 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-10T05:44:36.778 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-10T05:44:36.828 INFO:tasks.cephadm:Adding mgr.y on vm02
2026-03-10T05:44:36.828 INFO:tasks.cephadm:Adding mgr.x on vm05
2026-03-10T05:44:36.828 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph orch apply mgr '2;vm02=y;vm05=x'
2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: cephadm 2026-03-10T05:44:35.732719+0000 mgr.y (mgr.14152) 23 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf
2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: cephadm 2026-03-10T05:44:35.736144+0000 mgr.y (mgr.14152) 24 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf
2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: cephadm 2026-03-10T05:44:35.792393+0000 mgr.y (mgr.14152) 25 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring
2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: audit 2026-03-10T05:44:35.797588+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: audit 2026-03-10T05:44:35.841321+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: audit 2026-03-10T05:44:35.844919+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: cephadm 2026-03-10T05:44:35.845761+0000 mgr.y (mgr.14152) 26 : cephadm [INF] Reconfiguring mon.c (monmap
changed)... 2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: audit 2026-03-10T05:44:35.845927+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: audit 2026-03-10T05:44:35.846494+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: audit 2026-03-10T05:44:35.846895+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: cephadm 2026-03-10T05:44:35.847359+0000 mgr.y (mgr.14152) 27 : cephadm [INF] Reconfiguring daemon mon.c on vm02 2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: audit 2026-03-10T05:44:36.075540+0000 mon.a (mon.0) 175 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: cephadm 2026-03-10T05:44:36.075924+0000 mgr.y (mgr.14152) 28 : cephadm [INF] Reconfiguring mon.a (monmap changed)... 2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: audit 2026-03-10T05:44:36.076200+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: audit 2026-03-10T05:44:36.076614+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: audit 2026-03-10T05:44:36.076976+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: cephadm 2026-03-10T05:44:36.077382+0000 mgr.y (mgr.14152) 29 : cephadm [INF] Reconfiguring daemon mon.a on vm02 2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: audit 2026-03-10T05:44:36.192968+0000 mon.a (mon.0) 179 : audit [DBG] from='client.? 192.168.123.105:0/410320117' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: audit 2026-03-10T05:44:36.277115+0000 mon.a (mon.0) 180 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: cephadm 2026-03-10T05:44:36.277518+0000 mgr.y (mgr.14152) 30 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 
2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: audit 2026-03-10T05:44:36.277668+0000 mon.a (mon.0) 181 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: audit 2026-03-10T05:44:36.278190+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: audit 2026-03-10T05:44:36.279651+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: cephadm 2026-03-10T05:44:36.280044+0000 mgr.y (mgr.14152) 31 : cephadm [INF] Reconfiguring daemon mon.b on vm05 2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: audit 2026-03-10T05:44:36.510498+0000 mon.a (mon.0) 184 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: audit 2026-03-10T05:44:36.512487+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: audit 2026-03-10T05:44:36.513037+0000 mon.a (mon.0) 186 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: audit 2026-03-10T05:44:36.513383+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: audit 2026-03-10T05:44:36.578121+0000 mon.a (mon.0) 188 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: audit 2026-03-10T05:44:36.582618+0000 mon.a (mon.0) 189 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: audit 2026-03-10T05:44:36.585815+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: audit 2026-03-10T05:44:36.668608+0000 mon.a (mon.0) 191 : audit [DBG] from='client.? 
192.168.123.102:0/2043928545' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:36 vm02 bash[17462]: audit 2026-03-10T05:44:36.717126+0000 mon.a (mon.0) 192 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
[elided: journalctl@ceph.mon.c.vm02 relays the same cephadm 23-31 and audit/cluster entries 169-192 already shown above via journalctl@ceph.mon.a.vm02]
2026-03-10T05:44:37.257 INFO:teuthology.orchestra.run.vm05.stdout:Scheduled mgr update...
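The minimal-conf step above is easy to reproduce outside the harness: regenerate the conf from the cluster, then write it and the admin keyring to each host; the log shows the write done as sudo dd of=<path> with the file piped to stdin. A rough equivalent (host names from this run; the local file paths and ssh fan-out are hypothetical, teuthology uses its own transport):

# regenerate and fan out the minimal conf, mirroring the harness steps above
ceph config generate-minimal-conf > /tmp/minimal-ceph.conf
for host in vm02 vm05; do
    ssh "$host" sudo dd of=/etc/ceph/ceph.conf < /tmp/minimal-ceph.conf
    ssh "$host" sudo dd of=/etc/ceph/ceph.client.admin.keyring < /tmp/client.admin.keyring
done
# the placement spec '<count>;<host>=<id>;...' pins the mgr count and per-host
# daemon ids, as in the run's own: ceph orch apply mgr '2;vm02=y;vm05=x'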
2026-03-10T05:44:37.303 DEBUG:teuthology.orchestra.run.vm05:mgr.x> sudo journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mgr.x.service 2026-03-10T05:44:37.303 INFO:tasks.cephadm:Deploying OSDs... 2026-03-10T05:44:37.304 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T05:44:37.304 DEBUG:teuthology.orchestra.run.vm02:> dd if=/scratch_devs of=/dev/stdout 2026-03-10T05:44:37.307 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T05:44:37.307 DEBUG:teuthology.orchestra.run.vm02:> ls /dev/[sv]d? 2026-03-10T05:44:37.352 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vda 2026-03-10T05:44:37.352 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vdb 2026-03-10T05:44:37.352 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vdc 2026-03-10T05:44:37.352 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vdd 2026-03-10T05:44:37.352 INFO:teuthology.orchestra.run.vm02.stdout:/dev/vde 2026-03-10T05:44:37.352 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-10T05:44:37.352 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-10T05:44:37.352 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vdb 2026-03-10T05:44:37.396 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vdb 2026-03-10T05:44:37.396 INFO:teuthology.orchestra.run.vm02.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T05:44:37.396 INFO:teuthology.orchestra.run.vm02.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10 2026-03-10T05:44:37.396 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T05:44:37.396 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-10 05:44:20.919994867 +0000 2026-03-10T05:44:37.396 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-10 05:44:20.123994867 +0000 2026-03-10T05:44:37.396 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-10 05:44:20.123994867 +0000 2026-03-10T05:44:37.396 INFO:teuthology.orchestra.run.vm02.stdout: Birth: - 2026-03-10T05:44:37.396 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-10T05:44:37.443 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in 2026-03-10T05:44:37.443 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out 2026-03-10T05:44:37.443 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 9.7131e-05 s, 5.3 MB/s 2026-03-10T05:44:37.443 DEBUG:teuthology.orchestra.run.vm02:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-10T05:44:37.489 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vdc 2026-03-10T05:44:37.532 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vdc 2026-03-10T05:44:37.532 INFO:teuthology.orchestra.run.vm02.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T05:44:37.532 INFO:teuthology.orchestra.run.vm02.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20 2026-03-10T05:44:37.532 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T05:44:37.532 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-10 05:44:21.015994867 +0000 2026-03-10T05:44:37.532 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-10 05:44:20.123994867 +0000 2026-03-10T05:44:37.532 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-10 05:44:20.123994867 +0000 2026-03-10T05:44:37.532 INFO:teuthology.orchestra.run.vm02.stdout: Birth: - 2026-03-10T05:44:37.532 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-10T05:44:37.578 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in 2026-03-10T05:44:37.578 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out 2026-03-10T05:44:37.578 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000125205 s, 4.1 MB/s 2026-03-10T05:44:37.579 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-10T05:44:37.625 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vdd 2026-03-10T05:44:37.672 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vdd 2026-03-10T05:44:37.672 INFO:teuthology.orchestra.run.vm02.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T05:44:37.672 INFO:teuthology.orchestra.run.vm02.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30 2026-03-10T05:44:37.672 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T05:44:37.672 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-10 05:44:21.103994867 +0000 2026-03-10T05:44:37.672 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-10 05:44:20.127994867 +0000 2026-03-10T05:44:37.672 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-10 05:44:20.127994867 +0000 2026-03-10T05:44:37.672 INFO:teuthology.orchestra.run.vm02.stdout: Birth: - 2026-03-10T05:44:37.672 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-10T05:44:37.721 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in 2026-03-10T05:44:37.721 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out 2026-03-10T05:44:37.721 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000103313 s, 5.0 MB/s 2026-03-10T05:44:37.721 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-10T05:44:37.769 DEBUG:teuthology.orchestra.run.vm02:> stat /dev/vde 2026-03-10T05:44:37.804 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:37 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
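The systemd KillMode=none warning just above recurs for every cephadm-managed unit on these hosts; it is noise for this run, since the unit template ships with the old setting. One possible shape of the fix the message asks for (a sketch only, not something this test applies; check that the cephadm release in use tolerates it first):

# hypothetical drop-in at
# /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service.d/killmode.conf
[Service]
KillMode=mixed
# followed by: sudo systemctl daemon-reload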
[elided: journalctl@ceph.mon.b.vm05 replays the monitor-election and reconfiguration entries (mon.a 128-192, cephadm 21-31) already captured via the vm02 monitor journals]
2026-03-10T05:44:37.816 INFO:teuthology.orchestra.run.vm02.stdout: File: /dev/vde 2026-03-10T05:44:37.816 INFO:teuthology.orchestra.run.vm02.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T05:44:37.816 INFO:teuthology.orchestra.run.vm02.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40 2026-03-10T05:44:37.816 INFO:teuthology.orchestra.run.vm02.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T05:44:37.816 INFO:teuthology.orchestra.run.vm02.stdout:Access: 2026-03-10 05:44:21.191994867 +0000 2026-03-10T05:44:37.816 INFO:teuthology.orchestra.run.vm02.stdout:Modify: 2026-03-10 05:44:20.119994867 +0000 2026-03-10T05:44:37.816 INFO:teuthology.orchestra.run.vm02.stdout:Change: 2026-03-10 05:44:20.119994867 +0000 2026-03-10T05:44:37.816 INFO:teuthology.orchestra.run.vm02.stdout: Birth: - 2026-03-10T05:44:37.816 DEBUG:teuthology.orchestra.run.vm02:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-10T05:44:37.864 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records in 2026-03-10T05:44:37.864 INFO:teuthology.orchestra.run.vm02.stderr:1+0 records out 2026-03-10T05:44:37.864 INFO:teuthology.orchestra.run.vm02.stderr:512 bytes copied, 0.000152826 s, 3.4 MB/s 2026-03-10T05:44:37.864 DEBUG:teuthology.orchestra.run.vm02:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-10T05:44:37.909 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T05:44:37.909 DEBUG:teuthology.orchestra.run.vm05:> dd if=/scratch_devs of=/dev/stdout 2026-03-10T05:44:37.912 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T05:44:37.913 DEBUG:teuthology.orchestra.run.vm05:> ls /dev/[sv]d?
2026-03-10T05:44:37.958 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vda 2026-03-10T05:44:37.958 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vdb 2026-03-10T05:44:37.958 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vdc 2026-03-10T05:44:37.958 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vdd 2026-03-10T05:44:37.958 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vde 2026-03-10T05:44:37.959 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-10T05:44:37.959 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-10T05:44:37.959 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vdb 2026-03-10T05:44:38.005 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vdb 2026-03-10T05:44:38.005 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T05:44:38.005 INFO:teuthology.orchestra.run.vm05.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10 2026-03-10T05:44:38.005 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T05:44:38.005 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-10 05:44:24.374141532 +0000 2026-03-10T05:44:38.005 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-10 05:44:23.590141532 +0000 2026-03-10T05:44:38.005 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-10 05:44:23.590141532 +0000 2026-03-10T05:44:38.005 INFO:teuthology.orchestra.run.vm05.stdout: Birth: - 2026-03-10T05:44:38.005 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-10T05:44:38.051 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in 2026-03-10T05:44:38.051 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out 2026-03-10T05:44:38.051 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000165269 s, 3.1 MB/s 2026-03-10T05:44:38.052 DEBUG:teuthology.orchestra.run.vm05:> ! mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-10T05:44:38.062 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:37 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
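Both hosts get the same per-device screening, shown above for vm02 and continuing below for vm05: stat the node, read one sector with dd, and require that mount lists nothing on it (devtmpfs aside). Condensed into one loop (device list taken from this run):

# a scratch device qualifies only if it exists, is readable, and is unmounted
for dev in /dev/vdb /dev/vdc /dev/vdd /dev/vde; do
    stat "$dev" || exit 1                               # device node exists
    sudo dd if="$dev" of=/dev/null count=1 || exit 1    # first sector is readable
    if mount | grep -v devtmpfs | grep -q "$dev"; then  # must not be mounted
        exit 1
    fi
done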
2026-03-10T05:44:38.066 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vdc 2026-03-10T05:44:38.111 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vdc 2026-03-10T05:44:38.111 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T05:44:38.112 INFO:teuthology.orchestra.run.vm05.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20 2026-03-10T05:44:38.112 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T05:44:38.112 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-10 05:44:24.458141532 +0000 2026-03-10T05:44:38.112 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-10 05:44:23.594141532 +0000 2026-03-10T05:44:38.112 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-10 05:44:23.594141532 +0000 2026-03-10T05:44:38.112 INFO:teuthology.orchestra.run.vm05.stdout: Birth: - 2026-03-10T05:44:38.112 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-10T05:44:38.160 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in 2026-03-10T05:44:38.160 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out 2026-03-10T05:44:38.160 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000127168 s, 4.0 MB/s 2026-03-10T05:44:38.160 DEBUG:teuthology.orchestra.run.vm05:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-10T05:44:38.206 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vdd 2026-03-10T05:44:38.252 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vdd 2026-03-10T05:44:38.252 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T05:44:38.252 INFO:teuthology.orchestra.run.vm05.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30 2026-03-10T05:44:38.252 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T05:44:38.252 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-10 05:44:24.542141532 +0000 2026-03-10T05:44:38.252 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-10 05:44:23.594141532 +0000 2026-03-10T05:44:38.252 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-10 05:44:23.594141532 +0000 2026-03-10T05:44:38.252 INFO:teuthology.orchestra.run.vm05.stdout: Birth: - 2026-03-10T05:44:38.252 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-10T05:44:38.300 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in 2026-03-10T05:44:38.300 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out 2026-03-10T05:44:38.301 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000134091 s, 3.8 MB/s 2026-03-10T05:44:38.301 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:44:38 vm05 bash[18520]: debug 2026-03-10T05:44:38.098+0000 7f3de1778000 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T05:44:38.301 DEBUG:teuthology.orchestra.run.vm05:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-10T05:44:38.349 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vde 2026-03-10T05:44:38.396 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vde 2026-03-10T05:44:38.396 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T05:44:38.396 INFO:teuthology.orchestra.run.vm05.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40 2026-03-10T05:44:38.396 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T05:44:38.396 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-10 05:44:24.630141532 +0000 2026-03-10T05:44:38.396 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-10 05:44:23.590141532 +0000 2026-03-10T05:44:38.396 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-10 05:44:23.590141532 +0000 2026-03-10T05:44:38.396 INFO:teuthology.orchestra.run.vm05.stdout: Birth: - 2026-03-10T05:44:38.396 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-10T05:44:38.443 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in 2026-03-10T05:44:38.443 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out 2026-03-10T05:44:38.443 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000193502 s, 2.6 MB/s 2026-03-10T05:44:38.444 DEBUG:teuthology.orchestra.run.vm05:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-10T05:44:38.490 INFO:tasks.cephadm:Deploying osd.0 on vm02 with /dev/vde... 2026-03-10T05:44:38.490 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- lvm zap /dev/vde 2026-03-10T05:44:38.684 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:44:38 vm05 bash[18520]: debug 2026-03-10T05:44:38.378+0000 7f3de1778000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:38 vm02 bash[22526]: cluster 2026-03-10T05:44:32.712427+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:38 vm02 bash[22526]: cephadm 2026-03-10T05:44:37.314192+0000 mgr.y (mgr.14152) 37 : cephadm [INF] Deploying daemon mgr.x on vm05 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:38 vm02 bash[22526]: cluster 2026-03-10T05:44:37.732618+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:38 vm02 bash[22526]: cluster 2026-03-10T05:44:37.738342+0000 mon.a (mon.0) 203 : cluster [INF] mon.a calling monitor election 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:38 vm02 bash[22526]: cluster 2026-03-10T05:44:37.738639+0000 mon.c (mon.1) 3 : cluster [INF] mon.c calling monitor election 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:38 vm02 bash[22526]: cluster 2026-03-10T05:44:37.740584+0000 mon.a (mon.0) 204 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:38 vm02 bash[22526]: cluster 2026-03-10T05:44:37.743548+0000 mon.a (mon.0) 205 : cluster [DBG] monmap e3: 3 mons at 
{a=[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0],b=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],c=[v2:192.168.123.102:3301/0,v1:192.168.123.102:6790/0]} 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:38 vm02 bash[22526]: cluster 2026-03-10T05:44:37.743577+0000 mon.a (mon.0) 206 : cluster [DBG] fsmap 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:38 vm02 bash[22526]: cluster 2026-03-10T05:44:37.743593+0000 mon.a (mon.0) 207 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:38 vm02 bash[22526]: cluster 2026-03-10T05:44:37.743693+0000 mon.a (mon.0) 208 : cluster [DBG] mgrmap e13: y(active, since 23s) 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:38 vm02 bash[22526]: cluster 2026-03-10T05:44:37.746344+0000 mon.a (mon.0) 209 : cluster [INF] overall HEALTH_OK 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:38 vm02 bash[22526]: audit 2026-03-10T05:44:37.873280+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:38 vm02 bash[22526]: audit 2026-03-10T05:44:37.874917+0000 mon.a (mon.0) 211 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:38 vm02 bash[22526]: audit 2026-03-10T05:44:37.875633+0000 mon.a (mon.0) 212 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:38 vm02 bash[22526]: audit 2026-03-10T05:44:37.875975+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:38 vm02 bash[17462]: cluster 2026-03-10T05:44:32.712427+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:38 vm02 bash[17462]: cephadm 2026-03-10T05:44:37.314192+0000 mgr.y (mgr.14152) 37 : cephadm [INF] Deploying daemon mgr.x on vm05 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:38 vm02 bash[17462]: cluster 2026-03-10T05:44:37.732618+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:38 vm02 bash[17462]: cluster 2026-03-10T05:44:37.738342+0000 mon.a (mon.0) 203 : cluster [INF] mon.a calling monitor election 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:38 vm02 bash[17462]: cluster 2026-03-10T05:44:37.738639+0000 mon.c (mon.1) 3 : cluster [INF] mon.c calling monitor election 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:38 vm02 bash[17462]: cluster 2026-03-10T05:44:37.740584+0000 mon.a (mon.0) 204 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:38 vm02 bash[17462]: cluster 2026-03-10T05:44:37.743548+0000 mon.a (mon.0) 205 : cluster [DBG] monmap e3: 3 mons at 
{a=[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0],b=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],c=[v2:192.168.123.102:3301/0,v1:192.168.123.102:6790/0]} 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:38 vm02 bash[17462]: cluster 2026-03-10T05:44:37.743577+0000 mon.a (mon.0) 206 : cluster [DBG] fsmap 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:38 vm02 bash[17462]: cluster 2026-03-10T05:44:37.743593+0000 mon.a (mon.0) 207 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:38 vm02 bash[17462]: cluster 2026-03-10T05:44:37.743693+0000 mon.a (mon.0) 208 : cluster [DBG] mgrmap e13: y(active, since 23s) 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:38 vm02 bash[17462]: cluster 2026-03-10T05:44:37.746344+0000 mon.a (mon.0) 209 : cluster [INF] overall HEALTH_OK 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:38 vm02 bash[17462]: audit 2026-03-10T05:44:37.873280+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:38 vm02 bash[17462]: audit 2026-03-10T05:44:37.874917+0000 mon.a (mon.0) 211 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:38 vm02 bash[17462]: audit 2026-03-10T05:44:37.875633+0000 mon.a (mon.0) 212 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:38.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:38 vm02 bash[17462]: audit 2026-03-10T05:44:37.875975+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:44:39.008 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:44:38 vm05 bash[18520]: debug 2026-03-10T05:44:38.822+0000 7f3de1778000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T05:44:39.008 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:44:38 vm05 bash[18520]: debug 2026-03-10T05:44:38.902+0000 7f3de1778000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T05:44:39.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:38 vm05 bash[17864]: cluster 2026-03-10T05:44:32.712427+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-10T05:44:39.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:38 vm05 bash[17864]: cephadm 2026-03-10T05:44:37.314192+0000 mgr.y (mgr.14152) 37 : cephadm [INF] Deploying daemon mgr.x on vm05 2026-03-10T05:44:39.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:38 vm05 bash[17864]: cluster 2026-03-10T05:44:37.732618+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election 2026-03-10T05:44:39.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:38 vm05 bash[17864]: cluster 2026-03-10T05:44:37.738342+0000 mon.a (mon.0) 203 : cluster [INF] mon.a calling monitor election 2026-03-10T05:44:39.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:38 vm05 bash[17864]: cluster 2026-03-10T05:44:37.738639+0000 mon.c (mon.1) 3 : cluster [INF] mon.c calling monitor election 2026-03-10T05:44:39.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:38 vm05 bash[17864]: 
cluster 2026-03-10T05:44:37.740584+0000 mon.a (mon.0) 204 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T05:44:39.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:38 vm05 bash[17864]: cluster 2026-03-10T05:44:37.743548+0000 mon.a (mon.0) 205 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0],b=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],c=[v2:192.168.123.102:3301/0,v1:192.168.123.102:6790/0]} 2026-03-10T05:44:39.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:38 vm05 bash[17864]: cluster 2026-03-10T05:44:37.743577+0000 mon.a (mon.0) 206 : cluster [DBG] fsmap 2026-03-10T05:44:39.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:38 vm05 bash[17864]: cluster 2026-03-10T05:44:37.743593+0000 mon.a (mon.0) 207 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T05:44:39.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:38 vm05 bash[17864]: cluster 2026-03-10T05:44:37.743693+0000 mon.a (mon.0) 208 : cluster [DBG] mgrmap e13: y(active, since 23s) 2026-03-10T05:44:39.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:38 vm05 bash[17864]: cluster 2026-03-10T05:44:37.746344+0000 mon.a (mon.0) 209 : cluster [INF] overall HEALTH_OK 2026-03-10T05:44:39.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:38 vm05 bash[17864]: audit 2026-03-10T05:44:37.873280+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:39.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:38 vm05 bash[17864]: audit 2026-03-10T05:44:37.874917+0000 mon.a (mon.0) 211 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:44:39.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:38 vm05 bash[17864]: audit 2026-03-10T05:44:37.875633+0000 mon.a (mon.0) 212 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:39.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:38 vm05 bash[17864]: audit 2026-03-10T05:44:37.875975+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:44:39.054 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T05:44:39.066 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph orch daemon add osd vm02:/dev/vde 2026-03-10T05:44:39.353 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:44:39 vm05 bash[18520]: debug 2026-03-10T05:44:39.078+0000 7f3de1778000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T05:44:39.353 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:44:39 vm05 bash[18520]: debug 2026-03-10T05:44:39.170+0000 7f3de1778000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T05:44:39.353 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:44:39 vm05 bash[18520]: debug 2026-03-10T05:44:39.222+0000 7f3de1778000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T05:44:39.689 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:44:39 vm05 bash[18520]: debug 2026-03-10T05:44:39.342+0000 7f3de1778000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 
2026-03-10T05:44:39.689 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:44:39 vm05 bash[18520]: debug 2026-03-10T05:44:39.394+0000 7f3de1778000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T05:44:39.689 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:44:39 vm05 bash[18520]: debug 2026-03-10T05:44:39.454+0000 7f3de1778000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T05:44:39.691 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:39 vm02 bash[17462]: audit 2026-03-10T05:44:38.716982+0000 mon.a (mon.0) 214 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:44:39.691 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:39 vm02 bash[17462]: cluster 2026-03-10T05:44:38.752771+0000 mgr.y (mgr.14152) 38 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:39.691 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:39 vm02 bash[17462]: audit 2026-03-10T05:44:39.432015+0000 mon.a (mon.0) 215 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T05:44:39.691 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:39 vm02 bash[17462]: audit 2026-03-10T05:44:39.433383+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T05:44:39.691 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:39 vm02 bash[17462]: audit 2026-03-10T05:44:39.433741+0000 mon.a (mon.0) 217 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:39.691 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:39 vm02 bash[22526]: audit 2026-03-10T05:44:38.716982+0000 mon.a (mon.0) 214 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:44:39.691 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:39 vm02 bash[22526]: cluster 2026-03-10T05:44:38.752771+0000 mgr.y (mgr.14152) 38 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:39.691 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:39 vm02 bash[22526]: audit 2026-03-10T05:44:39.432015+0000 mon.a (mon.0) 215 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T05:44:39.691 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:39 vm02 bash[22526]: audit 2026-03-10T05:44:39.433383+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T05:44:39.691 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:39 vm02 bash[22526]: audit 2026-03-10T05:44:39.433741+0000 mon.a (mon.0) 217 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:39.965 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:44:39 vm05 bash[18520]: debug 2026-03-10T05:44:39.910+0000 7f3de1778000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T05:44:39.966 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:39 vm05 bash[17864]: audit 
2026-03-10T05:44:38.716982+0000 mon.a (mon.0) 214 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:44:39.966 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:39 vm05 bash[17864]: cluster 2026-03-10T05:44:38.752771+0000 mgr.y (mgr.14152) 38 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:39.966 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:39 vm05 bash[17864]: audit 2026-03-10T05:44:39.432015+0000 mon.a (mon.0) 215 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T05:44:39.966 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:39 vm05 bash[17864]: audit 2026-03-10T05:44:39.433383+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T05:44:39.966 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:39 vm05 bash[17864]: audit 2026-03-10T05:44:39.433741+0000 mon.a (mon.0) 217 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:40.258 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:44:39 vm05 bash[18520]: debug 2026-03-10T05:44:39.954+0000 7f3de1778000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T05:44:40.258 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:44:40 vm05 bash[18520]: debug 2026-03-10T05:44:40.006+0000 7f3de1778000 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T05:44:40.682 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:44:40 vm05 bash[18520]: debug 2026-03-10T05:44:40.274+0000 7f3de1778000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T05:44:40.682 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:44:40 vm05 bash[18520]: debug 2026-03-10T05:44:40.326+0000 7f3de1778000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T05:44:40.682 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:44:40 vm05 bash[18520]: debug 2026-03-10T05:44:40.374+0000 7f3de1778000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T05:44:40.682 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:44:40 vm05 bash[18520]: debug 2026-03-10T05:44:40.446+0000 7f3de1778000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T05:44:40.959 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:44:40 vm05 bash[18520]: debug 2026-03-10T05:44:40.738+0000 7f3de1778000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T05:44:40.959 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:44:40 vm05 bash[18520]: debug 2026-03-10T05:44:40.902+0000 7f3de1778000 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T05:44:40.959 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:40 vm05 bash[17864]: audit 2026-03-10T05:44:39.430745+0000 mgr.y (mgr.14152) 39 : audit [DBG] from='client.14220 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:44:41.083 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:40 vm02 bash[17462]: audit 2026-03-10T05:44:39.430745+0000 mgr.y (mgr.14152) 39 : audit [DBG] from='client.14220 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vde", "target": 
["mon-mgr", ""]}]: dispatch 2026-03-10T05:44:41.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:40 vm02 bash[22526]: audit 2026-03-10T05:44:39.430745+0000 mgr.y (mgr.14152) 39 : audit [DBG] from='client.14220 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:44:41.258 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:44:40 vm05 bash[18520]: debug 2026-03-10T05:44:40.950+0000 7f3de1778000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T05:44:41.258 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:44:41 vm05 bash[18520]: debug 2026-03-10T05:44:41.002+0000 7f3de1778000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T05:44:41.258 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:44:41 vm05 bash[18520]: debug 2026-03-10T05:44:41.130+0000 7f3de1778000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T05:44:42.007 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:44:41 vm05 bash[18520]: debug 2026-03-10T05:44:41.558+0000 7f3de1778000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T05:44:42.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:41 vm05 bash[17864]: audit 2026-03-10T05:44:40.689095+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:42.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:41 vm05 bash[17864]: audit 2026-03-10T05:44:40.695854+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:42.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:41 vm05 bash[17864]: cephadm 2026-03-10T05:44:40.697065+0000 mgr.y (mgr.14152) 40 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 
2026-03-10T05:44:42.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:41 vm05 bash[17864]: audit 2026-03-10T05:44:40.697239+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T05:44:42.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:41 vm05 bash[17864]: audit 2026-03-10T05:44:40.697641+0000 mon.a (mon.0) 221 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T05:44:42.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:41 vm05 bash[17864]: audit 2026-03-10T05:44:40.697963+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:42.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:41 vm05 bash[17864]: cephadm 2026-03-10T05:44:40.698391+0000 mgr.y (mgr.14152) 41 : cephadm [INF] Reconfiguring daemon mgr.y on vm02 2026-03-10T05:44:42.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:41 vm05 bash[17864]: audit 2026-03-10T05:44:40.742144+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:42.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:41 vm05 bash[17864]: cluster 2026-03-10T05:44:40.752964+0000 mgr.y (mgr.14152) 42 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:42.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:41 vm05 bash[17864]: audit 2026-03-10T05:44:40.900977+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:42.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:41 vm05 bash[17864]: audit 2026-03-10T05:44:40.902037+0000 mon.a (mon.0) 225 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:44:42.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:41 vm05 bash[17864]: audit 2026-03-10T05:44:40.902615+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:42.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:41 vm05 bash[17864]: audit 2026-03-10T05:44:40.902936+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:44:42.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:41 vm05 bash[17864]: audit 2026-03-10T05:44:40.906086+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:42.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:41 vm05 bash[17864]: cluster 2026-03-10T05:44:41.571075+0000 mon.a (mon.0) 229 : cluster [DBG] Standby manager daemon x started 2026-03-10T05:44:42.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:41 vm05 bash[17864]: audit 2026-03-10T05:44:41.572047+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.? 
192.168.123.105:0/264173658' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T05:44:42.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:41 vm05 bash[17864]: audit 2026-03-10T05:44:41.572280+0000 mon.a (mon.0) 231 : audit [DBG] from='mgr.? 192.168.123.105:0/264173658' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T05:44:42.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:41 vm05 bash[17864]: audit 2026-03-10T05:44:41.572828+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.? 192.168.123.105:0/264173658' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T05:44:42.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:41 vm05 bash[17864]: audit 2026-03-10T05:44:41.573000+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.? 192.168.123.105:0/264173658' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T05:44:42.045 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:41 vm02 bash[17462]: audit 2026-03-10T05:44:40.689095+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:42.045 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:41 vm02 bash[17462]: audit 2026-03-10T05:44:40.695854+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:42.045 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:41 vm02 bash[17462]: cephadm 2026-03-10T05:44:40.697065+0000 mgr.y (mgr.14152) 40 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 2026-03-10T05:44:42.045 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:41 vm02 bash[17462]: audit 2026-03-10T05:44:40.697239+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T05:44:42.045 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:41 vm02 bash[17462]: audit 2026-03-10T05:44:40.697641+0000 mon.a (mon.0) 221 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T05:44:42.045 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:41 vm02 bash[17462]: audit 2026-03-10T05:44:40.697963+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:42.045 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:41 vm02 bash[17462]: cephadm 2026-03-10T05:44:40.698391+0000 mgr.y (mgr.14152) 41 : cephadm [INF] Reconfiguring daemon mgr.y on vm02 2026-03-10T05:44:42.045 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:41 vm02 bash[17462]: audit 2026-03-10T05:44:40.742144+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:42.045 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:41 vm02 bash[17462]: cluster 2026-03-10T05:44:40.752964+0000 mgr.y (mgr.14152) 42 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:42.045 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:41 vm02 bash[17462]: audit 2026-03-10T05:44:40.900977+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:42.045 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:41 vm02 bash[17462]: audit 2026-03-10T05:44:40.902037+0000 mon.a (mon.0) 225 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:44:42.045 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:41 vm02 bash[17462]: audit 2026-03-10T05:44:40.902615+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:42.045 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:41 vm02 bash[17462]: audit 2026-03-10T05:44:40.902936+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:44:42.045 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:41 vm02 bash[17462]: audit 2026-03-10T05:44:40.906086+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:42.045 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:41 vm02 bash[17462]: cluster 2026-03-10T05:44:41.571075+0000 mon.a (mon.0) 229 : cluster [DBG] Standby manager daemon x started 2026-03-10T05:44:42.045 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:41 vm02 bash[17462]: audit 2026-03-10T05:44:41.572047+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.? 192.168.123.105:0/264173658' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T05:44:42.045 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:41 vm02 bash[17462]: audit 2026-03-10T05:44:41.572280+0000 mon.a (mon.0) 231 : audit [DBG] from='mgr.? 192.168.123.105:0/264173658' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T05:44:42.045 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:41 vm02 bash[17462]: audit 2026-03-10T05:44:41.572828+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.? 192.168.123.105:0/264173658' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T05:44:42.045 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:41 vm02 bash[17462]: audit 2026-03-10T05:44:41.573000+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.? 192.168.123.105:0/264173658' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T05:44:42.045 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:41 vm02 bash[22526]: audit 2026-03-10T05:44:40.689095+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:42.046 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:41 vm02 bash[22526]: audit 2026-03-10T05:44:40.695854+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:42.046 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:41 vm02 bash[22526]: cephadm 2026-03-10T05:44:40.697065+0000 mgr.y (mgr.14152) 40 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 
2026-03-10T05:44:42.046 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:41 vm02 bash[22526]: audit 2026-03-10T05:44:40.697239+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T05:44:42.046 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:41 vm02 bash[22526]: audit 2026-03-10T05:44:40.697641+0000 mon.a (mon.0) 221 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T05:44:42.046 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:41 vm02 bash[22526]: audit 2026-03-10T05:44:40.697963+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:42.046 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:41 vm02 bash[22526]: cephadm 2026-03-10T05:44:40.698391+0000 mgr.y (mgr.14152) 41 : cephadm [INF] Reconfiguring daemon mgr.y on vm02 2026-03-10T05:44:42.046 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:41 vm02 bash[22526]: audit 2026-03-10T05:44:40.742144+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:42.046 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:41 vm02 bash[22526]: cluster 2026-03-10T05:44:40.752964+0000 mgr.y (mgr.14152) 42 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:42.046 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:41 vm02 bash[22526]: audit 2026-03-10T05:44:40.900977+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:42.046 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:41 vm02 bash[22526]: audit 2026-03-10T05:44:40.902037+0000 mon.a (mon.0) 225 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:44:42.046 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:41 vm02 bash[22526]: audit 2026-03-10T05:44:40.902615+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:42.046 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:41 vm02 bash[22526]: audit 2026-03-10T05:44:40.902936+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:44:42.046 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:41 vm02 bash[22526]: audit 2026-03-10T05:44:40.906086+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:42.046 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:41 vm02 bash[22526]: cluster 2026-03-10T05:44:41.571075+0000 mon.a (mon.0) 229 : cluster [DBG] Standby manager daemon x started 2026-03-10T05:44:42.046 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:41 vm02 bash[22526]: audit 2026-03-10T05:44:41.572047+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.? 
192.168.123.105:0/264173658' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T05:44:42.046 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:41 vm02 bash[22526]: audit 2026-03-10T05:44:41.572280+0000 mon.a (mon.0) 231 : audit [DBG] from='mgr.? 192.168.123.105:0/264173658' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T05:44:42.046 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:41 vm02 bash[22526]: audit 2026-03-10T05:44:41.572828+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.? 192.168.123.105:0/264173658' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T05:44:42.046 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:41 vm02 bash[22526]: audit 2026-03-10T05:44:41.573000+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.? 192.168.123.105:0/264173658' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T05:44:43.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:42 vm05 bash[17864]: cluster 2026-03-10T05:44:41.917337+0000 mon.a (mon.0) 234 : cluster [DBG] mgrmap e14: y(active, since 27s), standbys: x 2026-03-10T05:44:43.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:42 vm05 bash[17864]: audit 2026-03-10T05:44:41.917447+0000 mon.a (mon.0) 235 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T05:44:43.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:42 vm05 bash[17864]: audit 2026-03-10T05:44:42.557237+0000 mon.a (mon.0) 236 : audit [INF] from='client.? 192.168.123.102:0/197350836' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "181bfe3a-c244-4b31-bf3a-c6074cc650d1"}]: dispatch 2026-03-10T05:44:43.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:42 vm05 bash[17864]: audit 2026-03-10T05:44:42.563139+0000 mon.a (mon.0) 237 : audit [INF] from='client.? 192.168.123.102:0/197350836' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "181bfe3a-c244-4b31-bf3a-c6074cc650d1"}]': finished 2026-03-10T05:44:43.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:42 vm05 bash[17864]: cluster 2026-03-10T05:44:42.563221+0000 mon.a (mon.0) 238 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-10T05:44:43.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:42 vm05 bash[17864]: audit 2026-03-10T05:44:42.563310+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:44:43.333 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:42 vm02 bash[17462]: cluster 2026-03-10T05:44:41.917337+0000 mon.a (mon.0) 234 : cluster [DBG] mgrmap e14: y(active, since 27s), standbys: x 2026-03-10T05:44:43.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:42 vm02 bash[17462]: audit 2026-03-10T05:44:41.917447+0000 mon.a (mon.0) 235 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T05:44:43.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:42 vm02 bash[17462]: audit 2026-03-10T05:44:42.557237+0000 mon.a (mon.0) 236 : audit [INF] from='client.? 
192.168.123.102:0/197350836' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "181bfe3a-c244-4b31-bf3a-c6074cc650d1"}]: dispatch 2026-03-10T05:44:43.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:42 vm02 bash[17462]: audit 2026-03-10T05:44:42.563139+0000 mon.a (mon.0) 237 : audit [INF] from='client.? 192.168.123.102:0/197350836' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "181bfe3a-c244-4b31-bf3a-c6074cc650d1"}]': finished 2026-03-10T05:44:43.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:42 vm02 bash[17462]: cluster 2026-03-10T05:44:42.563221+0000 mon.a (mon.0) 238 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-10T05:44:43.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:42 vm02 bash[17462]: audit 2026-03-10T05:44:42.563310+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:44:43.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:42 vm02 bash[22526]: cluster 2026-03-10T05:44:41.917337+0000 mon.a (mon.0) 234 : cluster [DBG] mgrmap e14: y(active, since 27s), standbys: x 2026-03-10T05:44:43.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:42 vm02 bash[22526]: audit 2026-03-10T05:44:41.917447+0000 mon.a (mon.0) 235 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T05:44:43.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:42 vm02 bash[22526]: audit 2026-03-10T05:44:42.557237+0000 mon.a (mon.0) 236 : audit [INF] from='client.? 192.168.123.102:0/197350836' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "181bfe3a-c244-4b31-bf3a-c6074cc650d1"}]: dispatch 2026-03-10T05:44:43.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:42 vm02 bash[22526]: audit 2026-03-10T05:44:42.563139+0000 mon.a (mon.0) 237 : audit [INF] from='client.? 192.168.123.102:0/197350836' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "181bfe3a-c244-4b31-bf3a-c6074cc650d1"}]': finished 2026-03-10T05:44:43.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:42 vm02 bash[22526]: cluster 2026-03-10T05:44:42.563221+0000 mon.a (mon.0) 238 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-10T05:44:43.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:42 vm02 bash[22526]: audit 2026-03-10T05:44:42.563310+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:44:44.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:43 vm05 bash[17864]: cluster 2026-03-10T05:44:42.753154+0000 mgr.y (mgr.14152) 43 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:44.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:43 vm05 bash[17864]: audit 2026-03-10T05:44:43.131361+0000 mon.a (mon.0) 240 : audit [DBG] from='client.? 192.168.123.102:0/718993221' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T05:44:44.333 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:43 vm02 bash[17462]: cluster 2026-03-10T05:44:42.753154+0000 mgr.y (mgr.14152) 43 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:44.333 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:43 vm02 bash[17462]: audit 2026-03-10T05:44:43.131361+0000 mon.a (mon.0) 240 : audit [DBG] from='client.? 
192.168.123.102:0/718993221' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T05:44:44.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:43 vm02 bash[22526]: cluster 2026-03-10T05:44:42.753154+0000 mgr.y (mgr.14152) 43 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:44.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:43 vm02 bash[22526]: audit 2026-03-10T05:44:43.131361+0000 mon.a (mon.0) 240 : audit [DBG] from='client.? 192.168.123.102:0/718993221' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T05:44:46.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:45 vm05 bash[17864]: cluster 2026-03-10T05:44:44.753380+0000 mgr.y (mgr.14152) 44 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:46.333 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:45 vm02 bash[17462]: cluster 2026-03-10T05:44:44.753380+0000 mgr.y (mgr.14152) 44 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:46.333 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:45 vm02 bash[22526]: cluster 2026-03-10T05:44:44.753380+0000 mgr.y (mgr.14152) 44 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:48.203 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:47 vm02 bash[17462]: cluster 2026-03-10T05:44:46.753601+0000 mgr.y (mgr.14152) 45 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:48.203 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:47 vm02 bash[22526]: cluster 2026-03-10T05:44:46.753601+0000 mgr.y (mgr.14152) 45 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:48.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:47 vm05 bash[17864]: cluster 2026-03-10T05:44:46.753601+0000 mgr.y (mgr.14152) 45 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:49.083 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:48 vm02 bash[17462]: audit 2026-03-10T05:44:48.468499+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T05:44:49.083 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:48 vm02 bash[17462]: audit 2026-03-10T05:44:48.469079+0000 mon.a (mon.0) 242 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:49.083 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:48 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:44:49.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:48 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T05:44:49.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:48 vm02 bash[22526]: audit 2026-03-10T05:44:48.468499+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T05:44:49.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:48 vm02 bash[22526]: audit 2026-03-10T05:44:48.469079+0000 mon.a (mon.0) 242 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:49.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:48 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:44:49.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:48 vm05 bash[17864]: audit 2026-03-10T05:44:48.468499+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T05:44:49.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:48 vm05 bash[17864]: audit 2026-03-10T05:44:48.469079+0000 mon.a (mon.0) 242 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:49.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:49 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:44:49.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:49 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:44:49.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:44:49 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T05:44:50.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:50 vm05 bash[17864]: cephadm 2026-03-10T05:44:48.469510+0000 mgr.y (mgr.14152) 46 : cephadm [INF] Deploying daemon osd.0 on vm02 2026-03-10T05:44:50.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:50 vm05 bash[17864]: cluster 2026-03-10T05:44:48.753831+0000 mgr.y (mgr.14152) 47 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:50.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:50 vm05 bash[17864]: audit 2026-03-10T05:44:49.206768+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:50.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:50 vm05 bash[17864]: audit 2026-03-10T05:44:49.237705+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:44:50.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:50 vm05 bash[17864]: audit 2026-03-10T05:44:49.239296+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:50.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:50 vm05 bash[17864]: audit 2026-03-10T05:44:49.239675+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:44:50.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:50 vm02 bash[17462]: cephadm 2026-03-10T05:44:48.469510+0000 mgr.y (mgr.14152) 46 : cephadm [INF] Deploying daemon osd.0 on vm02 2026-03-10T05:44:50.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:50 vm02 bash[17462]: cluster 2026-03-10T05:44:48.753831+0000 mgr.y (mgr.14152) 47 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:50.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:50 vm02 bash[17462]: audit 2026-03-10T05:44:49.206768+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:50.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:50 vm02 bash[17462]: audit 2026-03-10T05:44:49.237705+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:44:50.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:50 vm02 bash[17462]: audit 2026-03-10T05:44:49.239296+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:50.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:50 vm02 bash[17462]: audit 2026-03-10T05:44:49.239675+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:44:50.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:50 vm02 bash[22526]: cephadm 2026-03-10T05:44:48.469510+0000 mgr.y (mgr.14152) 46 : cephadm [INF] Deploying daemon osd.0 on vm02 2026-03-10T05:44:50.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:50 vm02 bash[22526]: cluster 2026-03-10T05:44:48.753831+0000 mgr.y (mgr.14152) 47 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:50.584 
INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:50 vm02 bash[22526]: audit 2026-03-10T05:44:49.206768+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:50.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:50 vm02 bash[22526]: audit 2026-03-10T05:44:49.237705+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:44:50.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:50 vm02 bash[22526]: audit 2026-03-10T05:44:49.239296+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:50.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:50 vm02 bash[22526]: audit 2026-03-10T05:44:49.239675+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:44:52.460 INFO:teuthology.orchestra.run.vm02.stdout:Created osd(s) 0 on host 'vm02' 2026-03-10T05:44:52.469 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:52 vm02 bash[17462]: cluster 2026-03-10T05:44:50.754110+0000 mgr.y (mgr.14152) 48 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:52.469 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:52 vm02 bash[17462]: audit 2026-03-10T05:44:52.047220+0000 mon.c (mon.1) 4 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/3358143121,v1:192.168.123.102:6803/3358143121]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T05:44:52.469 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:52 vm02 bash[17462]: audit 2026-03-10T05:44:52.047506+0000 mon.a (mon.0) 247 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T05:44:52.469 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:52 vm02 bash[17462]: audit 2026-03-10T05:44:52.106380+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:52.469 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:52 vm02 bash[17462]: audit 2026-03-10T05:44:52.114031+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:52.469 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:52 vm02 bash[22526]: cluster 2026-03-10T05:44:50.754110+0000 mgr.y (mgr.14152) 48 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:52.469 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:52 vm02 bash[22526]: audit 2026-03-10T05:44:52.047220+0000 mon.c (mon.1) 4 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/3358143121,v1:192.168.123.102:6803/3358143121]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T05:44:52.469 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:52 vm02 bash[22526]: audit 2026-03-10T05:44:52.047506+0000 mon.a (mon.0) 247 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T05:44:52.469 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:52 vm02 bash[22526]: audit 2026-03-10T05:44:52.106380+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14152 
192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:52.469 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:52 vm02 bash[22526]: audit 2026-03-10T05:44:52.114031+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:52.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:52 vm05 bash[17864]: cluster 2026-03-10T05:44:50.754110+0000 mgr.y (mgr.14152) 48 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:52.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:52 vm05 bash[17864]: audit 2026-03-10T05:44:52.047220+0000 mon.c (mon.1) 4 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/3358143121,v1:192.168.123.102:6803/3358143121]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T05:44:52.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:52 vm05 bash[17864]: audit 2026-03-10T05:44:52.047506+0000 mon.a (mon.0) 247 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T05:44:52.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:52 vm05 bash[17864]: audit 2026-03-10T05:44:52.106380+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:52.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:52 vm05 bash[17864]: audit 2026-03-10T05:44:52.114031+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:52.514 DEBUG:teuthology.orchestra.run.vm02:osd.0> sudo journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.0.service 2026-03-10T05:44:52.515 INFO:tasks.cephadm:Deploying osd.1 on vm02 with /dev/vdd... 
2026-03-10T05:44:52.515 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- lvm zap /dev/vdd 2026-03-10T05:44:53.069 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T05:44:53.079 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph orch daemon add osd vm02:/dev/vdd 2026-03-10T05:44:53.331 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:53 vm02 bash[17462]: audit 2026-03-10T05:44:52.220109+0000 mon.a (mon.0) 250 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T05:44:53.331 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:53 vm02 bash[17462]: cluster 2026-03-10T05:44:52.220146+0000 mon.a (mon.0) 251 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-10T05:44:53.331 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:53 vm02 bash[17462]: audit 2026-03-10T05:44:52.220188+0000 mon.a (mon.0) 252 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:44:53.331 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:53 vm02 bash[17462]: audit 2026-03-10T05:44:52.220718+0000 mon.c (mon.1) 5 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/3358143121,v1:192.168.123.102:6803/3358143121]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:44:53.331 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:53 vm02 bash[17462]: audit 2026-03-10T05:44:52.221117+0000 mon.a (mon.0) 253 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:44:53.331 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:53 vm02 bash[17462]: audit 2026-03-10T05:44:52.458409+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:53.331 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:53 vm02 bash[17462]: audit 2026-03-10T05:44:52.479195+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:44:53.331 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:53 vm02 bash[17462]: audit 2026-03-10T05:44:52.479846+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:53.331 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:53 vm02 bash[17462]: audit 2026-03-10T05:44:52.480227+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:44:53.331 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:44:53 vm02 bash[25206]: debug 2026-03-10T05:44:53.227+0000 7f0aa4d1a700 -1 osd.0 0 waiting for initial osdmap 2026-03-10T05:44:53.331 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:44:53 vm02 bash[25206]: debug 2026-03-10T05:44:53.231+0000 7f0a9eeb0700 -1 osd.0 7 set_numa_affinity unable to 
identify public interface '' numa node: (2) No such file or directory 2026-03-10T05:44:53.331 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:53 vm02 bash[22526]: audit 2026-03-10T05:44:52.220109+0000 mon.a (mon.0) 250 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T05:44:53.331 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:53 vm02 bash[22526]: cluster 2026-03-10T05:44:52.220146+0000 mon.a (mon.0) 251 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-10T05:44:53.331 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:53 vm02 bash[22526]: audit 2026-03-10T05:44:52.220188+0000 mon.a (mon.0) 252 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:44:53.331 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:53 vm02 bash[22526]: audit 2026-03-10T05:44:52.220718+0000 mon.c (mon.1) 5 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/3358143121,v1:192.168.123.102:6803/3358143121]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:44:53.331 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:53 vm02 bash[22526]: audit 2026-03-10T05:44:52.221117+0000 mon.a (mon.0) 253 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:44:53.331 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:53 vm02 bash[22526]: audit 2026-03-10T05:44:52.458409+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:53.331 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:53 vm02 bash[22526]: audit 2026-03-10T05:44:52.479195+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:44:53.331 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:53 vm02 bash[22526]: audit 2026-03-10T05:44:52.479846+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:53.331 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:53 vm02 bash[22526]: audit 2026-03-10T05:44:52.480227+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:44:53.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:53 vm05 bash[17864]: audit 2026-03-10T05:44:52.220109+0000 mon.a (mon.0) 250 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T05:44:53.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:53 vm05 bash[17864]: cluster 2026-03-10T05:44:52.220146+0000 mon.a (mon.0) 251 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-10T05:44:53.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:53 vm05 bash[17864]: audit 2026-03-10T05:44:52.220188+0000 mon.a (mon.0) 252 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:44:53.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:53 vm05 bash[17864]: audit 2026-03-10T05:44:52.220718+0000 mon.c (mon.1) 5 : audit 
[INF] from='osd.0 [v2:192.168.123.102:6802/3358143121,v1:192.168.123.102:6803/3358143121]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:44:53.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:53 vm05 bash[17864]: audit 2026-03-10T05:44:52.221117+0000 mon.a (mon.0) 253 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:44:53.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:53 vm05 bash[17864]: audit 2026-03-10T05:44:52.458409+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:53.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:53 vm05 bash[17864]: audit 2026-03-10T05:44:52.479195+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:44:53.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:53 vm05 bash[17864]: audit 2026-03-10T05:44:52.479846+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:53.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:53 vm05 bash[17864]: audit 2026-03-10T05:44:52.480227+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:44:54.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:54 vm05 bash[17864]: cluster 2026-03-10T05:44:52.754326+0000 mgr.y (mgr.14152) 49 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:54.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:54 vm05 bash[17864]: audit 2026-03-10T05:44:53.222925+0000 mon.a (mon.0) 258 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-10T05:44:54.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:54 vm05 bash[17864]: cluster 2026-03-10T05:44:53.222967+0000 mon.a (mon.0) 259 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-10T05:44:54.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:54 vm05 bash[17864]: audit 2026-03-10T05:44:53.223528+0000 mon.a (mon.0) 260 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:44:54.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:54 vm05 bash[17864]: audit 2026-03-10T05:44:53.227300+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:44:54.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:54 vm05 bash[17864]: audit 2026-03-10T05:44:53.498338+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T05:44:54.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:54 vm05 bash[17864]: audit 2026-03-10T05:44:53.499606+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 
2026-03-10T05:44:54.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:54 vm05 bash[17864]: audit 2026-03-10T05:44:53.499993+0000 mon.a (mon.0) 264 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:54.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:54 vm05 bash[17864]: audit 2026-03-10T05:44:54.227636+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:44:54.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:54 vm02 bash[17462]: cluster 2026-03-10T05:44:52.754326+0000 mgr.y (mgr.14152) 49 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:54.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:54 vm02 bash[17462]: audit 2026-03-10T05:44:53.222925+0000 mon.a (mon.0) 258 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-10T05:44:54.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:54 vm02 bash[17462]: cluster 2026-03-10T05:44:53.222967+0000 mon.a (mon.0) 259 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-10T05:44:54.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:54 vm02 bash[17462]: audit 2026-03-10T05:44:53.223528+0000 mon.a (mon.0) 260 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:44:54.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:54 vm02 bash[17462]: audit 2026-03-10T05:44:53.227300+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:44:54.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:54 vm02 bash[17462]: audit 2026-03-10T05:44:53.498338+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T05:44:54.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:54 vm02 bash[17462]: audit 2026-03-10T05:44:53.499606+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T05:44:54.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:54 vm02 bash[17462]: audit 2026-03-10T05:44:53.499993+0000 mon.a (mon.0) 264 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:54.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:54 vm02 bash[17462]: audit 2026-03-10T05:44:54.227636+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:44:54.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:54 vm02 bash[22526]: cluster 2026-03-10T05:44:52.754326+0000 mgr.y (mgr.14152) 49 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:54.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:54 vm02 bash[22526]: audit 2026-03-10T05:44:53.222925+0000 mon.a (mon.0) 258 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": 
["host=vm02", "root=default"]}]': finished 2026-03-10T05:44:54.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:54 vm02 bash[22526]: cluster 2026-03-10T05:44:53.222967+0000 mon.a (mon.0) 259 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-10T05:44:54.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:54 vm02 bash[22526]: audit 2026-03-10T05:44:53.223528+0000 mon.a (mon.0) 260 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:44:54.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:54 vm02 bash[22526]: audit 2026-03-10T05:44:53.227300+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:44:54.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:54 vm02 bash[22526]: audit 2026-03-10T05:44:53.498338+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T05:44:54.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:54 vm02 bash[22526]: audit 2026-03-10T05:44:53.499606+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T05:44:54.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:54 vm02 bash[22526]: audit 2026-03-10T05:44:53.499993+0000 mon.a (mon.0) 264 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:44:54.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:54 vm02 bash[22526]: audit 2026-03-10T05:44:54.227636+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:44:55.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:55 vm05 bash[17864]: cluster 2026-03-10T05:44:53.076277+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T05:44:55.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:55 vm05 bash[17864]: cluster 2026-03-10T05:44:53.076358+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T05:44:55.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:55 vm05 bash[17864]: audit 2026-03-10T05:44:53.497066+0000 mgr.y (mgr.14152) 50 : audit [DBG] from='client.24121 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:44:55.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:55 vm05 bash[17864]: cluster 2026-03-10T05:44:54.232678+0000 mon.a (mon.0) 266 : cluster [INF] osd.0 [v2:192.168.123.102:6802/3358143121,v1:192.168.123.102:6803/3358143121] boot 2026-03-10T05:44:55.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:55 vm05 bash[17864]: cluster 2026-03-10T05:44:54.233684+0000 mon.a (mon.0) 267 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-10T05:44:55.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:55 vm05 bash[17864]: audit 2026-03-10T05:44:54.234345+0000 mon.a (mon.0) 268 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:44:55.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:55 vm02 bash[17462]: cluster 2026-03-10T05:44:53.076277+0000 
osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T05:44:55.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:55 vm02 bash[17462]: cluster 2026-03-10T05:44:53.076358+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T05:44:55.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:55 vm02 bash[17462]: audit 2026-03-10T05:44:53.497066+0000 mgr.y (mgr.14152) 50 : audit [DBG] from='client.24121 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:44:55.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:55 vm02 bash[17462]: cluster 2026-03-10T05:44:54.232678+0000 mon.a (mon.0) 266 : cluster [INF] osd.0 [v2:192.168.123.102:6802/3358143121,v1:192.168.123.102:6803/3358143121] boot 2026-03-10T05:44:55.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:55 vm02 bash[17462]: cluster 2026-03-10T05:44:54.233684+0000 mon.a (mon.0) 267 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-10T05:44:55.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:55 vm02 bash[17462]: audit 2026-03-10T05:44:54.234345+0000 mon.a (mon.0) 268 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:44:55.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:55 vm02 bash[22526]: cluster 2026-03-10T05:44:53.076277+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T05:44:55.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:55 vm02 bash[22526]: cluster 2026-03-10T05:44:53.076358+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T05:44:55.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:55 vm02 bash[22526]: audit 2026-03-10T05:44:53.497066+0000 mgr.y (mgr.14152) 50 : audit [DBG] from='client.24121 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:44:55.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:55 vm02 bash[22526]: cluster 2026-03-10T05:44:54.232678+0000 mon.a (mon.0) 266 : cluster [INF] osd.0 [v2:192.168.123.102:6802/3358143121,v1:192.168.123.102:6803/3358143121] boot 2026-03-10T05:44:55.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:55 vm02 bash[22526]: cluster 2026-03-10T05:44:54.233684+0000 mon.a (mon.0) 267 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-10T05:44:55.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:55 vm02 bash[22526]: audit 2026-03-10T05:44:54.234345+0000 mon.a (mon.0) 268 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:44:56.502 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:56 vm02 bash[17462]: cluster 2026-03-10T05:44:54.754514+0000 mgr.y (mgr.14152) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:56.502 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:56 vm02 bash[22526]: cluster 2026-03-10T05:44:54.754514+0000 mgr.y (mgr.14152) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:56.507 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:56 vm05 bash[17864]: cluster 2026-03-10T05:44:54.754514+0000 mgr.y (mgr.14152) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T05:44:57.833 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:57 vm02 bash[17462]: cephadm 
2026-03-10T05:44:56.540896+0000 mgr.y (mgr.14152) 52 : cephadm [INF] Detected new or changed devices on vm02 2026-03-10T05:44:57.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:57 vm02 bash[17462]: audit 2026-03-10T05:44:56.546033+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:57.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:57 vm02 bash[17462]: audit 2026-03-10T05:44:56.547791+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:44:57.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:57 vm02 bash[17462]: audit 2026-03-10T05:44:56.550795+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:57.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:57 vm02 bash[17462]: cluster 2026-03-10T05:44:56.754713+0000 mgr.y (mgr.14152) 53 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:44:57.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:57 vm02 bash[22526]: cephadm 2026-03-10T05:44:56.540896+0000 mgr.y (mgr.14152) 52 : cephadm [INF] Detected new or changed devices on vm02 2026-03-10T05:44:57.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:57 vm02 bash[22526]: audit 2026-03-10T05:44:56.546033+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:57.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:57 vm02 bash[22526]: audit 2026-03-10T05:44:56.547791+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:44:57.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:57 vm02 bash[22526]: audit 2026-03-10T05:44:56.550795+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:57.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:57 vm02 bash[22526]: cluster 2026-03-10T05:44:56.754713+0000 mgr.y (mgr.14152) 53 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:44:58.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:57 vm05 bash[17864]: cephadm 2026-03-10T05:44:56.540896+0000 mgr.y (mgr.14152) 52 : cephadm [INF] Detected new or changed devices on vm02 2026-03-10T05:44:58.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:57 vm05 bash[17864]: audit 2026-03-10T05:44:56.546033+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:58.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:57 vm05 bash[17864]: audit 2026-03-10T05:44:56.547791+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:44:58.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:57 vm05 bash[17864]: audit 2026-03-10T05:44:56.550795+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:44:58.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:57 vm05 bash[17864]: cluster 2026-03-10T05:44:56.754713+0000 mgr.y (mgr.14152) 53 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 4.8 MiB 
used, 20 GiB / 20 GiB avail 2026-03-10T05:44:58.833 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:58 vm02 bash[17462]: audit 2026-03-10T05:44:57.538849+0000 mon.b (mon.2) 3 : audit [INF] from='client.? 192.168.123.102:0/636244919' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c0820da9-42eb-422f-88aa-598d51d4e5e7"}]: dispatch 2026-03-10T05:44:58.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:58 vm02 bash[17462]: audit 2026-03-10T05:44:57.544341+0000 mon.a (mon.0) 272 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c0820da9-42eb-422f-88aa-598d51d4e5e7"}]: dispatch 2026-03-10T05:44:58.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:58 vm02 bash[17462]: audit 2026-03-10T05:44:57.549003+0000 mon.a (mon.0) 273 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c0820da9-42eb-422f-88aa-598d51d4e5e7"}]': finished 2026-03-10T05:44:58.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:58 vm02 bash[17462]: cluster 2026-03-10T05:44:57.549494+0000 mon.a (mon.0) 274 : cluster [DBG] osdmap e9: 2 total, 1 up, 2 in 2026-03-10T05:44:58.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:58 vm02 bash[17462]: audit 2026-03-10T05:44:57.549975+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:44:58.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:58 vm02 bash[17462]: audit 2026-03-10T05:44:58.128110+0000 mon.b (mon.2) 4 : audit [DBG] from='client.? 192.168.123.102:0/3670868092' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T05:44:58.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:58 vm02 bash[22526]: audit 2026-03-10T05:44:57.538849+0000 mon.b (mon.2) 3 : audit [INF] from='client.? 192.168.123.102:0/636244919' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c0820da9-42eb-422f-88aa-598d51d4e5e7"}]: dispatch 2026-03-10T05:44:58.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:58 vm02 bash[22526]: audit 2026-03-10T05:44:57.544341+0000 mon.a (mon.0) 272 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c0820da9-42eb-422f-88aa-598d51d4e5e7"}]: dispatch 2026-03-10T05:44:58.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:58 vm02 bash[22526]: audit 2026-03-10T05:44:57.549003+0000 mon.a (mon.0) 273 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c0820da9-42eb-422f-88aa-598d51d4e5e7"}]': finished 2026-03-10T05:44:58.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:58 vm02 bash[22526]: cluster 2026-03-10T05:44:57.549494+0000 mon.a (mon.0) 274 : cluster [DBG] osdmap e9: 2 total, 1 up, 2 in 2026-03-10T05:44:58.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:58 vm02 bash[22526]: audit 2026-03-10T05:44:57.549975+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:44:58.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:58 vm02 bash[22526]: audit 2026-03-10T05:44:58.128110+0000 mon.b (mon.2) 4 : audit [DBG] from='client.? 
192.168.123.102:0/3670868092' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T05:44:59.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:58 vm05 bash[17864]: audit 2026-03-10T05:44:57.538849+0000 mon.b (mon.2) 3 : audit [INF] from='client.? 192.168.123.102:0/636244919' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c0820da9-42eb-422f-88aa-598d51d4e5e7"}]: dispatch 2026-03-10T05:44:59.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:58 vm05 bash[17864]: audit 2026-03-10T05:44:57.544341+0000 mon.a (mon.0) 272 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c0820da9-42eb-422f-88aa-598d51d4e5e7"}]: dispatch 2026-03-10T05:44:59.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:58 vm05 bash[17864]: audit 2026-03-10T05:44:57.549003+0000 mon.a (mon.0) 273 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c0820da9-42eb-422f-88aa-598d51d4e5e7"}]': finished 2026-03-10T05:44:59.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:58 vm05 bash[17864]: cluster 2026-03-10T05:44:57.549494+0000 mon.a (mon.0) 274 : cluster [DBG] osdmap e9: 2 total, 1 up, 2 in 2026-03-10T05:44:59.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:58 vm05 bash[17864]: audit 2026-03-10T05:44:57.549975+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:44:59.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:58 vm05 bash[17864]: audit 2026-03-10T05:44:58.128110+0000 mon.b (mon.2) 4 : audit [DBG] from='client.? 192.168.123.102:0/3670868092' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T05:44:59.833 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:44:59 vm02 bash[17462]: cluster 2026-03-10T05:44:58.754935+0000 mgr.y (mgr.14152) 54 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:44:59.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:44:59 vm02 bash[22526]: cluster 2026-03-10T05:44:58.754935+0000 mgr.y (mgr.14152) 54 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:45:00.010 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:44:59 vm05 bash[17864]: cluster 2026-03-10T05:44:58.754935+0000 mgr.y (mgr.14152) 54 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:45:02.083 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:01 vm02 bash[17462]: cluster 2026-03-10T05:45:00.755154+0000 mgr.y (mgr.14152) 55 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:45:02.083 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:01 vm02 bash[22526]: cluster 2026-03-10T05:45:00.755154+0000 mgr.y (mgr.14152) 55 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:45:02.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:01 vm05 bash[17864]: cluster 2026-03-10T05:45:00.755154+0000 mgr.y (mgr.14152) 55 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:45:04.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:03 vm05 bash[17864]: cluster 2026-03-10T05:45:02.755362+0000 mgr.y (mgr.14152) 56 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:45:04.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:03 vm05 
bash[17864]: audit 2026-03-10T05:45:03.470800+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T05:45:04.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:03 vm05 bash[17864]: audit 2026-03-10T05:45:03.471268+0000 mon.a (mon.0) 277 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:04.025 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:03 vm02 bash[17462]: cluster 2026-03-10T05:45:02.755362+0000 mgr.y (mgr.14152) 56 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:45:04.025 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:03 vm02 bash[17462]: audit 2026-03-10T05:45:03.470800+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T05:45:04.025 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:03 vm02 bash[17462]: audit 2026-03-10T05:45:03.471268+0000 mon.a (mon.0) 277 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:04.025 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:03 vm02 bash[22526]: cluster 2026-03-10T05:45:02.755362+0000 mgr.y (mgr.14152) 56 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:45:04.025 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:03 vm02 bash[22526]: audit 2026-03-10T05:45:03.470800+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T05:45:04.025 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:03 vm02 bash[22526]: audit 2026-03-10T05:45:03.471268+0000 mon.a (mon.0) 277 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:04.279 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:04 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:45:04.279 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:04 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:45:04.279 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:45:04 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T05:45:04.279 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:45:04 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:45:04.279 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:04 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:45:04.279 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:04 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:45:04.279 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:45:04 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:45:04.279 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:45:04 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
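[Editor's note: the burst of identical systemd warnings above comes from the cephadm-generated unit template for this fsid, which sets KillMode=none — deprecated in current systemd. A sketch of silencing it with a drop-in override, using the templated unit name from this log; whether overriding a cephadm-managed unit is advisable depends on the cephadm version, and newer templates drop KillMode=none on their own:

    # Sketch only: drop-in override for the fsid-templated ceph unit.
    sudo mkdir -p /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service.d
    printf '[Service]\nKillMode=mixed\n' | sudo tee \
        /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service.d/killmode.conf
    sudo systemctl daemon-reload

Drop-ins merge into the unit, and a later KillMode= assignment overrides the template's value, so the generated unit file itself stays untouched.]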
2026-03-10T05:45:05.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:05 vm02 bash[17462]: cephadm 2026-03-10T05:45:03.471602+0000 mgr.y (mgr.14152) 57 : cephadm [INF] Deploying daemon osd.1 on vm02 2026-03-10T05:45:05.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:05 vm02 bash[17462]: audit 2026-03-10T05:45:04.299889+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:05.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:05 vm02 bash[17462]: audit 2026-03-10T05:45:04.328383+0000 mon.a (mon.0) 279 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:45:05.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:05 vm02 bash[17462]: audit 2026-03-10T05:45:04.334660+0000 mon.a (mon.0) 280 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:05.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:05 vm02 bash[17462]: audit 2026-03-10T05:45:04.336416+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:45:05.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:05 vm02 bash[22526]: cephadm 2026-03-10T05:45:03.471602+0000 mgr.y (mgr.14152) 57 : cephadm [INF] Deploying daemon osd.1 on vm02 2026-03-10T05:45:05.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:05 vm02 bash[22526]: audit 2026-03-10T05:45:04.299889+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:05.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:05 vm02 bash[22526]: audit 2026-03-10T05:45:04.328383+0000 mon.a (mon.0) 279 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:45:05.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:05 vm02 bash[22526]: audit 2026-03-10T05:45:04.334660+0000 mon.a (mon.0) 280 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:05.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:05 vm02 bash[22526]: audit 2026-03-10T05:45:04.336416+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:45:05.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:05 vm05 bash[17864]: cephadm 2026-03-10T05:45:03.471602+0000 mgr.y (mgr.14152) 57 : cephadm [INF] Deploying daemon osd.1 on vm02 2026-03-10T05:45:05.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:05 vm05 bash[17864]: audit 2026-03-10T05:45:04.299889+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:05.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:05 vm05 bash[17864]: audit 2026-03-10T05:45:04.328383+0000 mon.a (mon.0) 279 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:45:05.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:05 vm05 bash[17864]: audit 2026-03-10T05:45:04.334660+0000 mon.a (mon.0) 280 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:05.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:05 vm05 bash[17864]: audit 2026-03-10T05:45:04.336416+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:45:06.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:06 vm02 bash[17462]: cluster 2026-03-10T05:45:04.755743+0000 mgr.y (mgr.14152) 58 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:45:06.583 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:06 vm02 bash[22526]: cluster 2026-03-10T05:45:04.755743+0000 mgr.y (mgr.14152) 58 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:45:06.757 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:06 vm05 bash[17864]: cluster 2026-03-10T05:45:04.755743+0000 mgr.y (mgr.14152) 58 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:45:07.544 INFO:teuthology.orchestra.run.vm02.stdout:Created osd(s) 1 on host 'vm02' 2026-03-10T05:45:07.567 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:07 vm02 bash[17462]: cluster 2026-03-10T05:45:06.755955+0000 mgr.y (mgr.14152) 59 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:45:07.567 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:07 vm02 bash[17462]: audit 2026-03-10T05:45:07.154207+0000 mon.a (mon.0) 282 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3944310722,v1:192.168.123.102:6811/3944310722]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T05:45:07.567 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:07 vm02 bash[17462]: audit 2026-03-10T05:45:07.172046+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:07.567 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:07 vm02 bash[17462]: audit 2026-03-10T05:45:07.176467+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:07.568 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:07 vm02 bash[22526]: cluster 2026-03-10T05:45:06.755955+0000 mgr.y (mgr.14152) 59 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:45:07.568 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:07 vm02 bash[22526]: audit 2026-03-10T05:45:07.154207+0000 mon.a (mon.0) 282 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3944310722,v1:192.168.123.102:6811/3944310722]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T05:45:07.568 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:07 vm02 bash[22526]: audit 2026-03-10T05:45:07.172046+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:07.568 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:07 vm02 bash[22526]: audit 2026-03-10T05:45:07.176467+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:07.628 DEBUG:teuthology.orchestra.run.vm02:osd.1> sudo journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.1.service 2026-03-10T05:45:07.628 INFO:tasks.cephadm:Deploying osd.2 on vm02 with /dev/vdc... 
2026-03-10T05:45:07.628 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- lvm zap /dev/vdc 2026-03-10T05:45:07.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:07 vm05 bash[17864]: cluster 2026-03-10T05:45:06.755955+0000 mgr.y (mgr.14152) 59 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:45:07.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:07 vm05 bash[17864]: audit 2026-03-10T05:45:07.154207+0000 mon.a (mon.0) 282 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3944310722,v1:192.168.123.102:6811/3944310722]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T05:45:07.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:07 vm05 bash[17864]: audit 2026-03-10T05:45:07.172046+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:07.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:07 vm05 bash[17864]: audit 2026-03-10T05:45:07.176467+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:08.173 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T05:45:08.180 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph orch daemon add osd vm02:/dev/vdc 2026-03-10T05:45:08.402 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:08 vm02 bash[17462]: audit 2026-03-10T05:45:07.315967+0000 mon.a (mon.0) 285 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3944310722,v1:192.168.123.102:6811/3944310722]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T05:45:08.402 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:08 vm02 bash[17462]: cluster 2026-03-10T05:45:07.316058+0000 mon.a (mon.0) 286 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-10T05:45:08.402 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:08 vm02 bash[17462]: audit 2026-03-10T05:45:07.316100+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:45:08.402 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:08 vm02 bash[17462]: audit 2026-03-10T05:45:07.316936+0000 mon.a (mon.0) 288 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3944310722,v1:192.168.123.102:6811/3944310722]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:45:08.402 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:08 vm02 bash[17462]: audit 2026-03-10T05:45:07.541966+0000 mon.a (mon.0) 289 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:08.402 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:08 vm02 bash[17462]: audit 2026-03-10T05:45:07.542483+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:45:08.402 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:08 vm02 bash[17462]: audit 2026-03-10T05:45:07.546071+0000 mon.a 
(mon.0) 291 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:08.402 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:08 vm02 bash[17462]: audit 2026-03-10T05:45:07.550410+0000 mon.a (mon.0) 292 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:45:08.402 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:45:08 vm02 bash[28375]: debug 2026-03-10T05:45:08.319+0000 7f08b347a700 -1 osd.1 0 waiting for initial osdmap 2026-03-10T05:45:08.402 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:45:08 vm02 bash[28375]: debug 2026-03-10T05:45:08.323+0000 7f08afe15700 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T05:45:08.402 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:08 vm02 bash[22526]: audit 2026-03-10T05:45:07.315967+0000 mon.a (mon.0) 285 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3944310722,v1:192.168.123.102:6811/3944310722]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T05:45:08.402 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:08 vm02 bash[22526]: cluster 2026-03-10T05:45:07.316058+0000 mon.a (mon.0) 286 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-10T05:45:08.402 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:08 vm02 bash[22526]: audit 2026-03-10T05:45:07.316100+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:45:08.403 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:08 vm02 bash[22526]: audit 2026-03-10T05:45:07.316936+0000 mon.a (mon.0) 288 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3944310722,v1:192.168.123.102:6811/3944310722]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:45:08.403 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:08 vm02 bash[22526]: audit 2026-03-10T05:45:07.541966+0000 mon.a (mon.0) 289 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:08.403 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:08 vm02 bash[22526]: audit 2026-03-10T05:45:07.542483+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:45:08.403 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:08 vm02 bash[22526]: audit 2026-03-10T05:45:07.546071+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:08.403 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:08 vm02 bash[22526]: audit 2026-03-10T05:45:07.550410+0000 mon.a (mon.0) 292 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:45:08.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:08 vm05 bash[17864]: audit 2026-03-10T05:45:07.315967+0000 mon.a (mon.0) 285 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3944310722,v1:192.168.123.102:6811/3944310722]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 
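[Editor's note: the "set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory" line each OSD prints at startup is the OSD failing to map its (unset) public interface to a NUMA node — expected on single-socket VMs like these and harmless. If the startup noise is unwanted, automatic NUMA pinning can be disabled; a sketch, assuming the standard osd_numa_auto_affinity option:

    # Disable automatic NUMA affinity detection for all OSDs.
    ceph config set osd osd_numa_auto_affinity false

]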
2026-03-10T05:45:08.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:08 vm05 bash[17864]: cluster 2026-03-10T05:45:07.316058+0000 mon.a (mon.0) 286 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-10T05:45:08.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:08 vm05 bash[17864]: audit 2026-03-10T05:45:07.316100+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:45:08.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:08 vm05 bash[17864]: audit 2026-03-10T05:45:07.316936+0000 mon.a (mon.0) 288 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3944310722,v1:192.168.123.102:6811/3944310722]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:45:08.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:08 vm05 bash[17864]: audit 2026-03-10T05:45:07.541966+0000 mon.a (mon.0) 289 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:08.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:08 vm05 bash[17864]: audit 2026-03-10T05:45:07.542483+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:45:08.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:08 vm05 bash[17864]: audit 2026-03-10T05:45:07.546071+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:08.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:08 vm05 bash[17864]: audit 2026-03-10T05:45:07.550410+0000 mon.a (mon.0) 292 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:45:09.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:09 vm05 bash[17864]: audit 2026-03-10T05:45:08.319031+0000 mon.a (mon.0) 293 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3944310722,v1:192.168.123.102:6811/3944310722]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-10T05:45:09.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:09 vm05 bash[17864]: cluster 2026-03-10T05:45:08.319173+0000 mon.a (mon.0) 294 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-10T05:45:09.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:09 vm05 bash[17864]: audit 2026-03-10T05:45:08.321517+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:45:09.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:09 vm05 bash[17864]: audit 2026-03-10T05:45:08.326947+0000 mon.a (mon.0) 296 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:45:09.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:09 vm05 bash[17864]: audit 2026-03-10T05:45:08.565910+0000 mgr.y (mgr.14152) 60 : audit [DBG] from='client.14259 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:45:09.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:09 vm05 bash[17864]: audit 
2026-03-10T05:45:08.567274+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T05:45:09.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:09 vm05 bash[17864]: audit 2026-03-10T05:45:08.568392+0000 mon.a (mon.0) 298 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T05:45:09.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:09 vm05 bash[17864]: audit 2026-03-10T05:45:08.568786+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:09.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:09 vm05 bash[17864]: cluster 2026-03-10T05:45:08.756176+0000 mgr.y (mgr.14152) 61 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:45:09.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:09 vm05 bash[17864]: cluster 2026-03-10T05:45:09.324871+0000 mon.a (mon.0) 300 : cluster [INF] osd.1 [v2:192.168.123.102:6810/3944310722,v1:192.168.123.102:6811/3944310722] boot 2026-03-10T05:45:09.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:09 vm05 bash[17864]: cluster 2026-03-10T05:45:09.324906+0000 mon.a (mon.0) 301 : cluster [DBG] osdmap e12: 2 total, 2 up, 2 in 2026-03-10T05:45:09.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:09 vm05 bash[17864]: audit 2026-03-10T05:45:09.327221+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:45:09.833 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:09 vm02 bash[17462]: audit 2026-03-10T05:45:08.319031+0000 mon.a (mon.0) 293 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3944310722,v1:192.168.123.102:6811/3944310722]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-10T05:45:09.833 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:09 vm02 bash[17462]: cluster 2026-03-10T05:45:08.319173+0000 mon.a (mon.0) 294 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-10T05:45:09.833 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:09 vm02 bash[17462]: audit 2026-03-10T05:45:08.321517+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:45:09.833 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:09 vm02 bash[17462]: audit 2026-03-10T05:45:08.326947+0000 mon.a (mon.0) 296 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:45:09.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:09 vm02 bash[17462]: audit 2026-03-10T05:45:08.565910+0000 mgr.y (mgr.14152) 60 : audit [DBG] from='client.14259 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:45:09.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:09 vm02 bash[17462]: audit 2026-03-10T05:45:08.567274+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 
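[Editor's note: at this point osd.1 has booted and osdmap e12 reports 2 total, 2 up, 2 in. Equivalent hand-run state checks, as a sketch via the same cephadm shell wrapper this run uses:

    # CRUSH view: both OSDs under host=vm02 with device class hdd.
    sudo cephadm shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph osd tree
    # Overall health plus the current osdmap epoch.
    sudo cephadm shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph -s

]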
2026-03-10T05:45:09.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:09 vm02 bash[17462]: audit 2026-03-10T05:45:08.568392+0000 mon.a (mon.0) 298 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T05:45:09.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:09 vm02 bash[17462]: audit 2026-03-10T05:45:08.568786+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:09.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:09 vm02 bash[17462]: cluster 2026-03-10T05:45:08.756176+0000 mgr.y (mgr.14152) 61 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:45:09.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:09 vm02 bash[17462]: cluster 2026-03-10T05:45:09.324871+0000 mon.a (mon.0) 300 : cluster [INF] osd.1 [v2:192.168.123.102:6810/3944310722,v1:192.168.123.102:6811/3944310722] boot 2026-03-10T05:45:09.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:09 vm02 bash[17462]: cluster 2026-03-10T05:45:09.324906+0000 mon.a (mon.0) 301 : cluster [DBG] osdmap e12: 2 total, 2 up, 2 in 2026-03-10T05:45:09.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:09 vm02 bash[17462]: audit 2026-03-10T05:45:09.327221+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:45:09.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:09 vm02 bash[22526]: audit 2026-03-10T05:45:08.319031+0000 mon.a (mon.0) 293 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3944310722,v1:192.168.123.102:6811/3944310722]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-10T05:45:09.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:09 vm02 bash[22526]: cluster 2026-03-10T05:45:08.319173+0000 mon.a (mon.0) 294 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-10T05:45:09.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:09 vm02 bash[22526]: audit 2026-03-10T05:45:08.321517+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:45:09.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:09 vm02 bash[22526]: audit 2026-03-10T05:45:08.326947+0000 mon.a (mon.0) 296 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:45:09.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:09 vm02 bash[22526]: audit 2026-03-10T05:45:08.565910+0000 mgr.y (mgr.14152) 60 : audit [DBG] from='client.14259 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:45:09.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:09 vm02 bash[22526]: audit 2026-03-10T05:45:08.567274+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T05:45:09.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:09 vm02 bash[22526]: audit 2026-03-10T05:45:08.568392+0000 mon.a (mon.0) 298 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T05:45:09.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:09 vm02 bash[22526]: audit 2026-03-10T05:45:08.568786+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:09.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:09 vm02 bash[22526]: cluster 2026-03-10T05:45:08.756176+0000 mgr.y (mgr.14152) 61 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T05:45:09.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:09 vm02 bash[22526]: cluster 2026-03-10T05:45:09.324871+0000 mon.a (mon.0) 300 : cluster [INF] osd.1 [v2:192.168.123.102:6810/3944310722,v1:192.168.123.102:6811/3944310722] boot 2026-03-10T05:45:09.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:09 vm02 bash[22526]: cluster 2026-03-10T05:45:09.324906+0000 mon.a (mon.0) 301 : cluster [DBG] osdmap e12: 2 total, 2 up, 2 in 2026-03-10T05:45:09.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:09 vm02 bash[22526]: audit 2026-03-10T05:45:09.327221+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:45:10.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:10 vm02 bash[17462]: cluster 2026-03-10T05:45:08.109327+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T05:45:10.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:10 vm02 bash[17462]: cluster 2026-03-10T05:45:08.109398+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T05:45:10.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:10 vm02 bash[22526]: cluster 2026-03-10T05:45:08.109327+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T05:45:10.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:10 vm02 bash[22526]: cluster 2026-03-10T05:45:08.109398+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T05:45:10.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:10 vm05 bash[17864]: cluster 2026-03-10T05:45:08.109327+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T05:45:10.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:10 vm05 bash[17864]: cluster 2026-03-10T05:45:08.109398+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T05:45:11.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:11 vm02 bash[17462]: cluster 2026-03-10T05:45:10.756386+0000 mgr.y (mgr.14152) 62 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 9.6 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:45:11.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:11 vm02 bash[22526]: cluster 2026-03-10T05:45:10.756386+0000 mgr.y (mgr.14152) 62 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 9.6 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:45:11.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:11 vm05 bash[17864]: cluster 2026-03-10T05:45:10.756386+0000 mgr.y (mgr.14152) 62 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 9.6 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:45:13.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:12 vm05 bash[17864]: cephadm 2026-03-10T05:45:11.669924+0000 mgr.y (mgr.14152) 63 : cephadm [INF] Detected new or changed devices on vm02 2026-03-10T05:45:13.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:12 vm05 bash[17864]: audit 
2026-03-10T05:45:11.676042+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:13.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:12 vm05 bash[17864]: audit 2026-03-10T05:45:11.676551+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:45:13.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:12 vm05 bash[17864]: audit 2026-03-10T05:45:11.680210+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:13.083 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:12 vm02 bash[22526]: cephadm 2026-03-10T05:45:11.669924+0000 mgr.y (mgr.14152) 63 : cephadm [INF] Detected new or changed devices on vm02 2026-03-10T05:45:13.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:12 vm02 bash[22526]: audit 2026-03-10T05:45:11.676042+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:13.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:12 vm02 bash[22526]: audit 2026-03-10T05:45:11.676551+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:45:13.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:12 vm02 bash[22526]: audit 2026-03-10T05:45:11.680210+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:13.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:12 vm02 bash[17462]: cephadm 2026-03-10T05:45:11.669924+0000 mgr.y (mgr.14152) 63 : cephadm [INF] Detected new or changed devices on vm02 2026-03-10T05:45:13.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:12 vm02 bash[17462]: audit 2026-03-10T05:45:11.676042+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:13.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:12 vm02 bash[17462]: audit 2026-03-10T05:45:11.676551+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:45:13.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:12 vm02 bash[17462]: audit 2026-03-10T05:45:11.680210+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:14.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:13 vm05 bash[17864]: audit 2026-03-10T05:45:12.694116+0000 mon.c (mon.1) 6 : audit [INF] from='client.? 192.168.123.102:0/636240245' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2d5b11d8-3856-47e7-80bc-ba0d5e91fd6c"}]: dispatch 2026-03-10T05:45:14.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:13 vm05 bash[17864]: audit 2026-03-10T05:45:12.694328+0000 mon.a (mon.0) 306 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2d5b11d8-3856-47e7-80bc-ba0d5e91fd6c"}]: dispatch 2026-03-10T05:45:14.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:13 vm05 bash[17864]: audit 2026-03-10T05:45:12.699593+0000 mon.a (mon.0) 307 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2d5b11d8-3856-47e7-80bc-ba0d5e91fd6c"}]': finished 2026-03-10T05:45:14.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:13 vm05 bash[17864]: cluster 2026-03-10T05:45:12.699678+0000 mon.a (mon.0) 308 : cluster [DBG] osdmap e13: 3 total, 2 up, 3 in 2026-03-10T05:45:14.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:13 vm05 bash[17864]: audit 2026-03-10T05:45:12.699801+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:45:14.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:13 vm05 bash[17864]: cluster 2026-03-10T05:45:12.756575+0000 mgr.y (mgr.14152) 64 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 9.6 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:45:14.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:13 vm05 bash[17864]: audit 2026-03-10T05:45:13.303392+0000 mon.c (mon.1) 7 : audit [DBG] from='client.? 192.168.123.102:0/2177975789' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T05:45:14.083 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:13 vm02 bash[17462]: audit 2026-03-10T05:45:12.694116+0000 mon.c (mon.1) 6 : audit [INF] from='client.? 192.168.123.102:0/636240245' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2d5b11d8-3856-47e7-80bc-ba0d5e91fd6c"}]: dispatch 2026-03-10T05:45:14.083 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:13 vm02 bash[17462]: audit 2026-03-10T05:45:12.694328+0000 mon.a (mon.0) 306 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2d5b11d8-3856-47e7-80bc-ba0d5e91fd6c"}]: dispatch 2026-03-10T05:45:14.083 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:13 vm02 bash[17462]: audit 2026-03-10T05:45:12.699593+0000 mon.a (mon.0) 307 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2d5b11d8-3856-47e7-80bc-ba0d5e91fd6c"}]': finished 2026-03-10T05:45:14.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:13 vm02 bash[17462]: cluster 2026-03-10T05:45:12.699678+0000 mon.a (mon.0) 308 : cluster [DBG] osdmap e13: 3 total, 2 up, 3 in 2026-03-10T05:45:14.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:13 vm02 bash[17462]: audit 2026-03-10T05:45:12.699801+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:45:14.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:13 vm02 bash[17462]: cluster 2026-03-10T05:45:12.756575+0000 mgr.y (mgr.14152) 64 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 9.6 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:45:14.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:13 vm02 bash[17462]: audit 2026-03-10T05:45:13.303392+0000 mon.c (mon.1) 7 : audit [DBG] from='client.? 192.168.123.102:0/2177975789' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T05:45:14.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:13 vm02 bash[22526]: audit 2026-03-10T05:45:12.694116+0000 mon.c (mon.1) 6 : audit [INF] from='client.? 
192.168.123.102:0/636240245' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2d5b11d8-3856-47e7-80bc-ba0d5e91fd6c"}]: dispatch 2026-03-10T05:45:14.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:13 vm02 bash[22526]: audit 2026-03-10T05:45:12.694328+0000 mon.a (mon.0) 306 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2d5b11d8-3856-47e7-80bc-ba0d5e91fd6c"}]: dispatch 2026-03-10T05:45:14.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:13 vm02 bash[22526]: audit 2026-03-10T05:45:12.699593+0000 mon.a (mon.0) 307 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2d5b11d8-3856-47e7-80bc-ba0d5e91fd6c"}]': finished 2026-03-10T05:45:14.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:13 vm02 bash[22526]: cluster 2026-03-10T05:45:12.699678+0000 mon.a (mon.0) 308 : cluster [DBG] osdmap e13: 3 total, 2 up, 3 in 2026-03-10T05:45:14.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:13 vm02 bash[22526]: audit 2026-03-10T05:45:12.699801+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:45:14.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:13 vm02 bash[22526]: cluster 2026-03-10T05:45:12.756575+0000 mgr.y (mgr.14152) 64 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 9.6 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:45:14.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:13 vm02 bash[22526]: audit 2026-03-10T05:45:13.303392+0000 mon.c (mon.1) 7 : audit [DBG] from='client.? 192.168.123.102:0/2177975789' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T05:45:15.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:14 vm05 bash[17864]: audit 2026-03-10T05:45:14.039278+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:45:15.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:14 vm05 bash[17864]: audit 2026-03-10T05:45:14.040151+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:45:15.083 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:14 vm02 bash[17462]: audit 2026-03-10T05:45:14.039278+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:45:15.083 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:14 vm02 bash[17462]: audit 2026-03-10T05:45:14.040151+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:45:15.083 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:14 vm02 bash[22526]: audit 2026-03-10T05:45:14.039278+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:45:15.083 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:14 vm02 bash[22526]: audit 2026-03-10T05:45:14.040151+0000 mon.a (mon.0) 311 : audit 
[INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:45:15.833 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:15 vm02 bash[17462]: cluster 2026-03-10T05:45:14.756753+0000 mgr.y (mgr.14152) 65 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:45:15.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:15 vm02 bash[22526]: cluster 2026-03-10T05:45:14.756753+0000 mgr.y (mgr.14152) 65 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:45:16.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:15 vm05 bash[17864]: cluster 2026-03-10T05:45:14.756753+0000 mgr.y (mgr.14152) 65 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:45:18.083 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:17 vm02 bash[17462]: cluster 2026-03-10T05:45:16.756997+0000 mgr.y (mgr.14152) 66 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:45:18.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:17 vm02 bash[22526]: cluster 2026-03-10T05:45:16.756997+0000 mgr.y (mgr.14152) 66 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:45:18.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:17 vm05 bash[17864]: cluster 2026-03-10T05:45:16.756997+0000 mgr.y (mgr.14152) 66 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:45:18.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:18 vm02 bash[17462]: audit 2026-03-10T05:45:18.736234+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T05:45:18.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:18 vm02 bash[17462]: audit 2026-03-10T05:45:18.736800+0000 mon.a (mon.0) 313 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:18.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:18 vm02 bash[22526]: audit 2026-03-10T05:45:18.736234+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T05:45:18.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:18 vm02 bash[22526]: audit 2026-03-10T05:45:18.736800+0000 mon.a (mon.0) 313 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:19.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:18 vm05 bash[17864]: audit 2026-03-10T05:45:18.736234+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T05:45:19.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:18 vm05 bash[17864]: audit 2026-03-10T05:45:18.736800+0000 mon.a (mon.0) 313 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:19.523 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:45:19 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:45:19.523 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:45:19 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:45:19.523 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:45:19 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:45:19.523 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:45:19 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:45:19.523 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:45:19 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:45:19.523 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:45:19 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:45:19.523 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:19 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:45:19.523 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:19 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
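The KillMode=none warnings above repeat for every cephadm-managed unit on the host because the ceph-<fsid>@.service template deployed here (cephadm from the v17.2.0 image) still sets KillMode=none at line 24; for this run they are noise. A minimal host-side sketch of overriding it on a lab box, assuming one wanted the warning gone (cephadm owns and may regenerate the unit file, so a systemd drop-in is the safer experiment than editing the template in place):

  # hypothetical drop-in; unit name uses the fsid from this log
  sudo systemctl edit ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.2.service
  # add in the editor:
  #   [Service]
  #   KillMode=mixed
  sudo systemctl daemon-reload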
2026-03-10T05:45:19.523 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:19 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:45:19.523 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:19 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:45:20.083 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:19 vm02 bash[17462]: cephadm 2026-03-10T05:45:18.737240+0000 mgr.y (mgr.14152) 67 : cephadm [INF] Deploying daemon osd.2 on vm02 2026-03-10T05:45:20.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:19 vm02 bash[17462]: cluster 2026-03-10T05:45:18.757208+0000 mgr.y (mgr.14152) 68 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:45:20.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:19 vm02 bash[17462]: audit 2026-03-10T05:45:19.532136+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:20.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:19 vm02 bash[17462]: audit 2026-03-10T05:45:19.557037+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:45:20.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:19 vm02 bash[17462]: audit 2026-03-10T05:45:19.558628+0000 mon.a (mon.0) 316 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:20.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:19 vm02 bash[17462]: audit 2026-03-10T05:45:19.559205+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:45:20.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:19 vm02 bash[22526]: cephadm 2026-03-10T05:45:18.737240+0000 mgr.y (mgr.14152) 67 : cephadm [INF] Deploying daemon osd.2 on vm02 2026-03-10T05:45:20.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:19 vm02 bash[22526]: cluster 2026-03-10T05:45:18.757208+0000 mgr.y (mgr.14152) 68 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:45:20.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:19 vm02 bash[22526]: audit 2026-03-10T05:45:19.532136+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:20.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:19 vm02 bash[22526]: audit 2026-03-10T05:45:19.557037+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:45:20.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:19 vm02 bash[22526]: audit 2026-03-10T05:45:19.558628+0000 mon.a 
(mon.0) 316 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:20.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:19 vm02 bash[22526]: audit 2026-03-10T05:45:19.559205+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:45:20.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:19 vm05 bash[17864]: cephadm 2026-03-10T05:45:18.737240+0000 mgr.y (mgr.14152) 67 : cephadm [INF] Deploying daemon osd.2 on vm02 2026-03-10T05:45:20.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:19 vm05 bash[17864]: cluster 2026-03-10T05:45:18.757208+0000 mgr.y (mgr.14152) 68 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:45:20.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:19 vm05 bash[17864]: audit 2026-03-10T05:45:19.532136+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:20.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:19 vm05 bash[17864]: audit 2026-03-10T05:45:19.557037+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:45:20.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:19 vm05 bash[17864]: audit 2026-03-10T05:45:19.558628+0000 mon.a (mon.0) 316 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:20.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:19 vm05 bash[17864]: audit 2026-03-10T05:45:19.559205+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:45:22.083 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:21 vm02 bash[17462]: cluster 2026-03-10T05:45:20.757455+0000 mgr.y (mgr.14152) 69 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:45:22.083 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:21 vm02 bash[22526]: cluster 2026-03-10T05:45:20.757455+0000 mgr.y (mgr.14152) 69 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:45:22.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:21 vm05 bash[17864]: cluster 2026-03-10T05:45:20.757455+0000 mgr.y (mgr.14152) 69 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:45:22.865 INFO:teuthology.orchestra.run.vm02.stdout:Created osd(s) 2 on host 'vm02' 2026-03-10T05:45:22.924 DEBUG:teuthology.orchestra.run.vm02:osd.2> sudo journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.2.service 2026-03-10T05:45:22.925 INFO:tasks.cephadm:Deploying osd.3 on vm02 with /dev/vdb... 
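With osd.2 created, the task moves on to osd.3: the DEBUG line below wipes /dev/vdb through ceph-volume inside the cephadm container before handing the device back to the orchestrator. Outside a test harness the same wipe is normally requested through the orchestrator itself; a rough sketch using this run's host and device:

  ceph orch device zap vm02 /dev/vdb --force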
2026-03-10T05:45:22.925 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- lvm zap /dev/vdb 2026-03-10T05:45:23.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:22 vm02 bash[17462]: cluster 2026-03-10T05:45:21.830105+0000 mon.a (mon.0) 318 : cluster [DBG] osdmap e14: 3 total, 2 up, 3 in 2026-03-10T05:45:23.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:22 vm02 bash[17462]: audit 2026-03-10T05:45:21.831022+0000 mon.a (mon.0) 319 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:45:23.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:22 vm02 bash[17462]: audit 2026-03-10T05:45:22.520971+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:23.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:22 vm02 bash[17462]: audit 2026-03-10T05:45:22.620650+0000 mon.c (mon.1) 8 : audit [INF] from='osd.2 [v2:192.168.123.102:6818/1818843754,v1:192.168.123.102:6819/1818843754]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T05:45:23.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:22 vm02 bash[17462]: audit 2026-03-10T05:45:22.624481+0000 mon.a (mon.0) 321 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T05:45:23.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:22 vm02 bash[17462]: audit 2026-03-10T05:45:22.672594+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:23.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:22 vm02 bash[22526]: cluster 2026-03-10T05:45:21.830105+0000 mon.a (mon.0) 318 : cluster [DBG] osdmap e14: 3 total, 2 up, 3 in 2026-03-10T05:45:23.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:22 vm02 bash[22526]: audit 2026-03-10T05:45:21.831022+0000 mon.a (mon.0) 319 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:45:23.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:22 vm02 bash[22526]: audit 2026-03-10T05:45:22.520971+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:23.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:22 vm02 bash[22526]: audit 2026-03-10T05:45:22.620650+0000 mon.c (mon.1) 8 : audit [INF] from='osd.2 [v2:192.168.123.102:6818/1818843754,v1:192.168.123.102:6819/1818843754]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T05:45:23.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:22 vm02 bash[22526]: audit 2026-03-10T05:45:22.624481+0000 mon.a (mon.0) 321 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T05:45:23.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:22 vm02 bash[22526]: audit 2026-03-10T05:45:22.672594+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:23.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:22 vm05 bash[17864]: cluster 2026-03-10T05:45:21.830105+0000 mon.a 
(mon.0) 318 : cluster [DBG] osdmap e14: 3 total, 2 up, 3 in
2026-03-10T05:45:23.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:22 vm05 bash[17864]: audit 2026-03-10T05:45:21.831022+0000 mon.a (mon.0) 319 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T05:45:23.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:22 vm05 bash[17864]: audit 2026-03-10T05:45:22.520971+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:45:23.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:22 vm05 bash[17864]: audit 2026-03-10T05:45:22.620650+0000 mon.c (mon.1) 8 : audit [INF] from='osd.2 [v2:192.168.123.102:6818/1818843754,v1:192.168.123.102:6819/1818843754]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T05:45:23.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:22 vm05 bash[17864]: audit 2026-03-10T05:45:22.624481+0000 mon.a (mon.0) 321 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T05:45:23.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:22 vm05 bash[17864]: audit 2026-03-10T05:45:22.672594+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:45:23.517 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T05:45:23.529 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph orch daemon add osd vm02:/dev/vdb
2026-03-10T05:45:24.083 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:23 vm02 bash[17462]: cluster 2026-03-10T05:45:22.757656+0000 mgr.y (mgr.14152) 70 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail
2026-03-10T05:45:24.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:23 vm02 bash[17462]: audit 2026-03-10T05:45:22.859199+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:45:24.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:23 vm02 bash[17462]: audit 2026-03-10T05:45:22.887771+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:45:24.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:23 vm02 bash[17462]: audit 2026-03-10T05:45:22.888590+0000 mon.a (mon.0) 325 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:45:24.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:23 vm02 bash[17462]: audit 2026-03-10T05:45:22.889043+0000 mon.a (mon.0) 326 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:45:24.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:23 vm02 bash[17462]: audit 2026-03-10T05:45:23.532797+0000 mon.a (mon.0) 327 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-10T05:45:24.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:23 vm02 bash[17462]: cluster 2026-03-10T05:45:23.532939+0000 mon.a (mon.0) 328 :
cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-10T05:45:24.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:23 vm02 bash[17462]: audit 2026-03-10T05:45:23.533058+0000 mon.a (mon.0) 329 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:45:24.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:23 vm02 bash[17462]: audit 2026-03-10T05:45:23.533970+0000 mon.c (mon.1) 9 : audit [INF] from='osd.2 [v2:192.168.123.102:6818/1818843754,v1:192.168.123.102:6819/1818843754]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:45:24.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:23 vm02 bash[17462]: audit 2026-03-10T05:45:23.534549+0000 mon.a (mon.0) 330 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:45:24.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:23 vm02 bash[22526]: cluster 2026-03-10T05:45:22.757656+0000 mgr.y (mgr.14152) 70 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:45:24.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:23 vm02 bash[22526]: audit 2026-03-10T05:45:22.859199+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:24.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:23 vm02 bash[22526]: audit 2026-03-10T05:45:22.887771+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:45:24.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:23 vm02 bash[22526]: audit 2026-03-10T05:45:22.888590+0000 mon.a (mon.0) 325 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:24.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:23 vm02 bash[22526]: audit 2026-03-10T05:45:22.889043+0000 mon.a (mon.0) 326 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:45:24.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:23 vm02 bash[22526]: audit 2026-03-10T05:45:23.532797+0000 mon.a (mon.0) 327 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T05:45:24.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:23 vm02 bash[22526]: cluster 2026-03-10T05:45:23.532939+0000 mon.a (mon.0) 328 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-10T05:45:24.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:23 vm02 bash[22526]: audit 2026-03-10T05:45:23.533058+0000 mon.a (mon.0) 329 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:45:24.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:23 vm02 bash[22526]: audit 2026-03-10T05:45:23.533970+0000 mon.c (mon.1) 9 : audit [INF] from='osd.2 [v2:192.168.123.102:6818/1818843754,v1:192.168.123.102:6819/1818843754]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:45:24.084 
INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:23 vm02 bash[22526]: audit 2026-03-10T05:45:23.534549+0000 mon.a (mon.0) 330 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:45:24.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:23 vm05 bash[17864]: cluster 2026-03-10T05:45:22.757656+0000 mgr.y (mgr.14152) 70 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:45:24.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:23 vm05 bash[17864]: audit 2026-03-10T05:45:22.859199+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:24.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:23 vm05 bash[17864]: audit 2026-03-10T05:45:22.887771+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:45:24.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:23 vm05 bash[17864]: audit 2026-03-10T05:45:22.888590+0000 mon.a (mon.0) 325 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:24.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:23 vm05 bash[17864]: audit 2026-03-10T05:45:22.889043+0000 mon.a (mon.0) 326 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:45:24.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:23 vm05 bash[17864]: audit 2026-03-10T05:45:23.532797+0000 mon.a (mon.0) 327 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T05:45:24.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:23 vm05 bash[17864]: cluster 2026-03-10T05:45:23.532939+0000 mon.a (mon.0) 328 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-10T05:45:24.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:23 vm05 bash[17864]: audit 2026-03-10T05:45:23.533058+0000 mon.a (mon.0) 329 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:45:24.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:23 vm05 bash[17864]: audit 2026-03-10T05:45:23.533970+0000 mon.c (mon.1) 9 : audit [INF] from='osd.2 [v2:192.168.123.102:6818/1818843754,v1:192.168.123.102:6819/1818843754]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:45:24.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:23 vm05 bash[17864]: audit 2026-03-10T05:45:23.534549+0000 mon.a (mon.0) 330 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:45:24.834 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:45:24 vm02 bash[31546]: debug 2026-03-10T05:45:24.531+0000 7f49b1738700 -1 osd.2 0 waiting for initial osdmap 2026-03-10T05:45:24.834 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:45:24 vm02 bash[31546]: debug 2026-03-10T05:45:24.535+0000 7f49ab8ce700 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T05:45:25.258 
INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:24 vm05 bash[17864]: audit 2026-03-10T05:45:23.922324+0000 mgr.y (mgr.14152) 71 : audit [DBG] from='client.14271 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:45:25.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:24 vm05 bash[17864]: audit 2026-03-10T05:45:23.923601+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T05:45:25.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:24 vm05 bash[17864]: audit 2026-03-10T05:45:23.924856+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T05:45:25.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:24 vm05 bash[17864]: audit 2026-03-10T05:45:23.925274+0000 mon.a (mon.0) 333 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:25.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:24 vm05 bash[17864]: audit 2026-03-10T05:45:24.532565+0000 mon.a (mon.0) 334 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-10T05:45:25.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:24 vm05 bash[17864]: cluster 2026-03-10T05:45:24.534521+0000 mon.a (mon.0) 335 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-10T05:45:25.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:24 vm05 bash[17864]: audit 2026-03-10T05:45:24.538335+0000 mon.a (mon.0) 336 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:45:25.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:24 vm02 bash[17462]: audit 2026-03-10T05:45:23.922324+0000 mgr.y (mgr.14152) 71 : audit [DBG] from='client.14271 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:45:25.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:24 vm02 bash[17462]: audit 2026-03-10T05:45:23.923601+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T05:45:25.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:24 vm02 bash[17462]: audit 2026-03-10T05:45:23.924856+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T05:45:25.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:24 vm02 bash[17462]: audit 2026-03-10T05:45:23.925274+0000 mon.a (mon.0) 333 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:25.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:24 vm02 bash[17462]: audit 2026-03-10T05:45:24.532565+0000 mon.a (mon.0) 334 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-10T05:45:25.334 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:24 vm02 bash[17462]: cluster 2026-03-10T05:45:24.534521+0000 mon.a (mon.0) 335 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-10T05:45:25.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:24 vm02 bash[17462]: audit 2026-03-10T05:45:24.538335+0000 mon.a (mon.0) 336 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:45:25.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:24 vm02 bash[22526]: audit 2026-03-10T05:45:23.922324+0000 mgr.y (mgr.14152) 71 : audit [DBG] from='client.14271 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm02:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:45:25.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:24 vm02 bash[22526]: audit 2026-03-10T05:45:23.923601+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T05:45:25.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:24 vm02 bash[22526]: audit 2026-03-10T05:45:23.924856+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T05:45:25.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:24 vm02 bash[22526]: audit 2026-03-10T05:45:23.925274+0000 mon.a (mon.0) 333 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:25.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:24 vm02 bash[22526]: audit 2026-03-10T05:45:24.532565+0000 mon.a (mon.0) 334 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-10T05:45:25.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:24 vm02 bash[22526]: cluster 2026-03-10T05:45:24.534521+0000 mon.a (mon.0) 335 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-10T05:45:25.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:24 vm02 bash[22526]: audit 2026-03-10T05:45:24.538335+0000 mon.a (mon.0) 336 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:45:26.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:25 vm05 bash[17864]: cluster 2026-03-10T05:45:23.624809+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T05:45:26.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:25 vm05 bash[17864]: cluster 2026-03-10T05:45:23.624883+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T05:45:26.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:25 vm05 bash[17864]: cluster 2026-03-10T05:45:24.757834+0000 mgr.y (mgr.14152) 72 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:45:26.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:25 vm05 bash[17864]: cluster 2026-03-10T05:45:25.537603+0000 mon.a (mon.0) 337 : cluster [INF] osd.2 [v2:192.168.123.102:6818/1818843754,v1:192.168.123.102:6819/1818843754] boot 2026-03-10T05:45:26.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:25 vm05 bash[17864]: cluster 2026-03-10T05:45:25.537645+0000 mon.a (mon.0) 338 : cluster [DBG] osdmap e17: 3 total, 3 up, 3 in 
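osdmap e17 now reports 3 total, 3 up, 3 in: all three OSDs created so far have booted. Illustrative checks one could run from a cephadm shell at this point (not part of the test itself):

  ceph osd stat   # summary, e.g. "3 osds: 3 up, 3 in"
  ceph osd tree   # CRUSH tree, including the hdd device class set above
  ceph -s         # overall health and pgmap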
2026-03-10T05:45:26.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:25 vm05 bash[17864]: audit 2026-03-10T05:45:25.538235+0000 mon.a (mon.0) 339 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:45:26.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:25 vm02 bash[17462]: cluster 2026-03-10T05:45:23.624809+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T05:45:26.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:25 vm02 bash[17462]: cluster 2026-03-10T05:45:23.624883+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T05:45:26.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:25 vm02 bash[17462]: cluster 2026-03-10T05:45:24.757834+0000 mgr.y (mgr.14152) 72 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:45:26.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:25 vm02 bash[17462]: cluster 2026-03-10T05:45:25.537603+0000 mon.a (mon.0) 337 : cluster [INF] osd.2 [v2:192.168.123.102:6818/1818843754,v1:192.168.123.102:6819/1818843754] boot 2026-03-10T05:45:26.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:25 vm02 bash[17462]: cluster 2026-03-10T05:45:25.537645+0000 mon.a (mon.0) 338 : cluster [DBG] osdmap e17: 3 total, 3 up, 3 in 2026-03-10T05:45:26.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:25 vm02 bash[17462]: audit 2026-03-10T05:45:25.538235+0000 mon.a (mon.0) 339 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:45:26.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:25 vm02 bash[22526]: cluster 2026-03-10T05:45:23.624809+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T05:45:26.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:25 vm02 bash[22526]: cluster 2026-03-10T05:45:23.624883+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T05:45:26.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:25 vm02 bash[22526]: cluster 2026-03-10T05:45:24.757834+0000 mgr.y (mgr.14152) 72 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-10T05:45:26.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:25 vm02 bash[22526]: cluster 2026-03-10T05:45:25.537603+0000 mon.a (mon.0) 337 : cluster [INF] osd.2 [v2:192.168.123.102:6818/1818843754,v1:192.168.123.102:6819/1818843754] boot 2026-03-10T05:45:26.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:25 vm02 bash[22526]: cluster 2026-03-10T05:45:25.537645+0000 mon.a (mon.0) 338 : cluster [DBG] osdmap e17: 3 total, 3 up, 3 in 2026-03-10T05:45:26.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:25 vm02 bash[22526]: audit 2026-03-10T05:45:25.538235+0000 mon.a (mon.0) 339 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:45:27.126 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:26 vm02 bash[17462]: audit 2026-03-10T05:45:26.042873+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32}]: dispatch 2026-03-10T05:45:27.126 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:26 vm02 bash[17462]: audit 2026-03-10T05:45:26.546109+0000 mon.a (mon.0) 341 : audit [INF] from='mgr.14152 
192.168.123.102:0/875640324' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32}]': finished 2026-03-10T05:45:27.126 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:26 vm02 bash[17462]: cluster 2026-03-10T05:45:26.546140+0000 mon.a (mon.0) 342 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-10T05:45:27.126 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:26 vm02 bash[17462]: audit 2026-03-10T05:45:26.548001+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T05:45:27.126 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:26 vm02 bash[22526]: audit 2026-03-10T05:45:26.042873+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32}]: dispatch 2026-03-10T05:45:27.126 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:26 vm02 bash[22526]: audit 2026-03-10T05:45:26.546109+0000 mon.a (mon.0) 341 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32}]': finished 2026-03-10T05:45:27.126 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:26 vm02 bash[22526]: cluster 2026-03-10T05:45:26.546140+0000 mon.a (mon.0) 342 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-10T05:45:27.126 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:26 vm02 bash[22526]: audit 2026-03-10T05:45:26.548001+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T05:45:27.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:26 vm05 bash[17864]: audit 2026-03-10T05:45:26.042873+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32}]: dispatch 2026-03-10T05:45:27.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:26 vm05 bash[17864]: audit 2026-03-10T05:45:26.546109+0000 mon.a (mon.0) 341 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32}]': finished 2026-03-10T05:45:27.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:26 vm05 bash[17864]: cluster 2026-03-10T05:45:26.546140+0000 mon.a (mon.0) 342 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-10T05:45:27.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:26 vm05 bash[17864]: audit 2026-03-10T05:45:26.548001+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T05:45:28.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:28 vm05 bash[17864]: cluster 2026-03-10T05:45:26.758275+0000 mgr.y (mgr.14152) 73 : cluster [DBG] pgmap v44: 1 pgs: 1 unknown; 0 B data, 15 MiB used, 60 GiB / 60 GiB avail 
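The audit records above show the mgr creating its internal .mgr pool with pg_num 1 (the create call passes pg_num_min 1 and pg_num_max 32) and tagging it with the mgr application, which is why the pgmap jumps from 0 pgs to 1 pg. A hypothetical way to inspect the result:

  ceph osd pool ls detail         # .mgr with pg_num 1 and application mgr
  ceph osd pool get .mgr pg_num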
2026-03-10T05:45:28.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:28 vm05 bash[17864]: cephadm 2026-03-10T05:45:27.128260+0000 mgr.y (mgr.14152) 74 : cephadm [INF] Detected new or changed devices on vm02 2026-03-10T05:45:28.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:28 vm05 bash[17864]: audit 2026-03-10T05:45:27.134764+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:28.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:28 vm05 bash[17864]: audit 2026-03-10T05:45:27.136354+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:45:28.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:28 vm05 bash[17864]: audit 2026-03-10T05:45:27.139768+0000 mon.a (mon.0) 346 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:28.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:28 vm05 bash[17864]: audit 2026-03-10T05:45:27.556594+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T05:45:28.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:28 vm05 bash[17864]: cluster 2026-03-10T05:45:27.556724+0000 mon.a (mon.0) 348 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-10T05:45:28.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:28 vm05 bash[17864]: audit 2026-03-10T05:45:28.112692+0000 mon.a (mon.0) 349 : audit [INF] from='client.? 192.168.123.102:0/183946179' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c8c62231-6895-42f2-ba03-c49e0ca5380e"}]: dispatch 2026-03-10T05:45:28.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:28 vm05 bash[17864]: audit 2026-03-10T05:45:28.117629+0000 mon.a (mon.0) 350 : audit [INF] from='client.? 
192.168.123.102:0/183946179' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c8c62231-6895-42f2-ba03-c49e0ca5380e"}]': finished 2026-03-10T05:45:28.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:28 vm05 bash[17864]: cluster 2026-03-10T05:45:28.117657+0000 mon.a (mon.0) 351 : cluster [DBG] osdmap e20: 4 total, 3 up, 4 in 2026-03-10T05:45:28.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:28 vm05 bash[17864]: audit 2026-03-10T05:45:28.117703+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T05:45:28.531 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:28 vm02 bash[22526]: cluster 2026-03-10T05:45:26.758275+0000 mgr.y (mgr.14152) 73 : cluster [DBG] pgmap v44: 1 pgs: 1 unknown; 0 B data, 15 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:45:28.531 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:28 vm02 bash[22526]: cephadm 2026-03-10T05:45:27.128260+0000 mgr.y (mgr.14152) 74 : cephadm [INF] Detected new or changed devices on vm02 2026-03-10T05:45:28.531 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:28 vm02 bash[22526]: audit 2026-03-10T05:45:27.134764+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:28.531 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:28 vm02 bash[22526]: audit 2026-03-10T05:45:27.136354+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:45:28.531 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:28 vm02 bash[22526]: audit 2026-03-10T05:45:27.139768+0000 mon.a (mon.0) 346 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:28.531 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:28 vm02 bash[22526]: audit 2026-03-10T05:45:27.556594+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T05:45:28.531 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:28 vm02 bash[22526]: cluster 2026-03-10T05:45:27.556724+0000 mon.a (mon.0) 348 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-10T05:45:28.531 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:28 vm02 bash[22526]: audit 2026-03-10T05:45:28.112692+0000 mon.a (mon.0) 349 : audit [INF] from='client.? 192.168.123.102:0/183946179' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c8c62231-6895-42f2-ba03-c49e0ca5380e"}]: dispatch 2026-03-10T05:45:28.531 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:28 vm02 bash[22526]: audit 2026-03-10T05:45:28.117629+0000 mon.a (mon.0) 350 : audit [INF] from='client.? 
192.168.123.102:0/183946179' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c8c62231-6895-42f2-ba03-c49e0ca5380e"}]': finished 2026-03-10T05:45:28.531 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:28 vm02 bash[22526]: cluster 2026-03-10T05:45:28.117657+0000 mon.a (mon.0) 351 : cluster [DBG] osdmap e20: 4 total, 3 up, 4 in 2026-03-10T05:45:28.531 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:28 vm02 bash[22526]: audit 2026-03-10T05:45:28.117703+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T05:45:28.532 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:28 vm02 bash[17462]: cluster 2026-03-10T05:45:26.758275+0000 mgr.y (mgr.14152) 73 : cluster [DBG] pgmap v44: 1 pgs: 1 unknown; 0 B data, 15 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:45:28.532 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:28 vm02 bash[17462]: cephadm 2026-03-10T05:45:27.128260+0000 mgr.y (mgr.14152) 74 : cephadm [INF] Detected new or changed devices on vm02 2026-03-10T05:45:28.532 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:28 vm02 bash[17462]: audit 2026-03-10T05:45:27.134764+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:28.532 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:28 vm02 bash[17462]: audit 2026-03-10T05:45:27.136354+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:45:28.532 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:28 vm02 bash[17462]: audit 2026-03-10T05:45:27.139768+0000 mon.a (mon.0) 346 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:28.532 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:28 vm02 bash[17462]: audit 2026-03-10T05:45:27.556594+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T05:45:28.532 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:28 vm02 bash[17462]: cluster 2026-03-10T05:45:27.556724+0000 mon.a (mon.0) 348 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-10T05:45:28.532 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:28 vm02 bash[17462]: audit 2026-03-10T05:45:28.112692+0000 mon.a (mon.0) 349 : audit [INF] from='client.? 192.168.123.102:0/183946179' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c8c62231-6895-42f2-ba03-c49e0ca5380e"}]: dispatch 2026-03-10T05:45:28.532 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:28 vm02 bash[17462]: audit 2026-03-10T05:45:28.117629+0000 mon.a (mon.0) 350 : audit [INF] from='client.? 
192.168.123.102:0/183946179' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c8c62231-6895-42f2-ba03-c49e0ca5380e"}]': finished 2026-03-10T05:45:28.532 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:28 vm02 bash[17462]: cluster 2026-03-10T05:45:28.117657+0000 mon.a (mon.0) 351 : cluster [DBG] osdmap e20: 4 total, 3 up, 4 in 2026-03-10T05:45:28.532 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:28 vm02 bash[17462]: audit 2026-03-10T05:45:28.117703+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T05:45:29.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:29 vm05 bash[17864]: audit 2026-03-10T05:45:28.537421+0000 mon.a (mon.0) 353 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T05:45:29.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:29 vm05 bash[17864]: audit 2026-03-10T05:45:28.688491+0000 mon.a (mon.0) 354 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T05:45:29.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:29 vm05 bash[17864]: audit 2026-03-10T05:45:28.688890+0000 mon.a (mon.0) 355 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:45:29.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:29 vm05 bash[17864]: audit 2026-03-10T05:45:28.689108+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:45:29.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:29 vm05 bash[17864]: audit 2026-03-10T05:45:28.689285+0000 mon.a (mon.0) 357 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:45:29.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:29 vm05 bash[17864]: audit 2026-03-10T05:45:28.690314+0000 mon.c (mon.1) 10 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T05:45:29.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:29 vm05 bash[17864]: audit 2026-03-10T05:45:28.703778+0000 mon.a (mon.0) 358 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:45:29.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:29 vm05 bash[17864]: audit 2026-03-10T05:45:28.703825+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:45:29.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:29 vm05 bash[17864]: audit 2026-03-10T05:45:28.704095+0000 mon.a (mon.0) 360 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:45:29.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:29 vm05 bash[17864]: audit 2026-03-10T05:45:28.834713+0000 mon.b (mon.2) 5 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T05:45:29.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:29 vm05 bash[17864]: audit 2026-03-10T05:45:28.837339+0000 mon.c (mon.1) 11 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T05:45:29.508 
INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:29 vm05 bash[17864]: audit 2026-03-10T05:45:28.838699+0000 mon.c (mon.1) 12 : audit [DBG] from='client.? 192.168.123.102:0/3385209186' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T05:45:29.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:29 vm05 bash[17864]: audit 2026-03-10T05:45:28.839828+0000 mon.a (mon.0) 361 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:45:29.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:29 vm05 bash[17864]: audit 2026-03-10T05:45:28.839881+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:45:29.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:29 vm05 bash[17864]: audit 2026-03-10T05:45:28.839924+0000 mon.a (mon.0) 363 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:45:29.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:29 vm05 bash[17864]: audit 2026-03-10T05:45:28.977281+0000 mon.b (mon.2) 6 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T05:45:29.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:29 vm02 bash[17462]: audit 2026-03-10T05:45:28.537421+0000 mon.a (mon.0) 353 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:29 vm02 bash[17462]: audit 2026-03-10T05:45:28.688491+0000 mon.a (mon.0) 354 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:29 vm02 bash[17462]: audit 2026-03-10T05:45:28.688890+0000 mon.a (mon.0) 355 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:29 vm02 bash[17462]: audit 2026-03-10T05:45:28.689108+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:29 vm02 bash[17462]: audit 2026-03-10T05:45:28.689285+0000 mon.a (mon.0) 357 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:29 vm02 bash[17462]: audit 2026-03-10T05:45:28.690314+0000 mon.c (mon.1) 10 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:29 vm02 bash[17462]: audit 2026-03-10T05:45:28.703778+0000 mon.a (mon.0) 358 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:29 vm02 bash[17462]: audit 2026-03-10T05:45:28.703825+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:29 vm02 
bash[17462]: audit 2026-03-10T05:45:28.704095+0000 mon.a (mon.0) 360 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:29 vm02 bash[17462]: audit 2026-03-10T05:45:28.834713+0000 mon.b (mon.2) 5 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:29 vm02 bash[17462]: audit 2026-03-10T05:45:28.837339+0000 mon.c (mon.1) 11 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:29 vm02 bash[17462]: audit 2026-03-10T05:45:28.838699+0000 mon.c (mon.1) 12 : audit [DBG] from='client.? 192.168.123.102:0/3385209186' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:29 vm02 bash[17462]: audit 2026-03-10T05:45:28.839828+0000 mon.a (mon.0) 361 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:29 vm02 bash[17462]: audit 2026-03-10T05:45:28.839881+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:29 vm02 bash[17462]: audit 2026-03-10T05:45:28.839924+0000 mon.a (mon.0) 363 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:29 vm02 bash[17462]: audit 2026-03-10T05:45:28.977281+0000 mon.b (mon.2) 6 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:29 vm02 bash[22526]: audit 2026-03-10T05:45:28.537421+0000 mon.a (mon.0) 353 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:29 vm02 bash[22526]: audit 2026-03-10T05:45:28.688491+0000 mon.a (mon.0) 354 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:29 vm02 bash[22526]: audit 2026-03-10T05:45:28.688890+0000 mon.a (mon.0) 355 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:29 vm02 bash[22526]: audit 2026-03-10T05:45:28.689108+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:29 vm02 bash[22526]: audit 2026-03-10T05:45:28.689285+0000 mon.a (mon.0) 357 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:29 vm02 bash[22526]: audit 2026-03-10T05:45:28.690314+0000 mon.c (mon.1) 10 : audit [INF] from='admin socket' 
entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:29 vm02 bash[22526]: audit 2026-03-10T05:45:28.703778+0000 mon.a (mon.0) 358 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:29 vm02 bash[22526]: audit 2026-03-10T05:45:28.703825+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:29 vm02 bash[22526]: audit 2026-03-10T05:45:28.704095+0000 mon.a (mon.0) 360 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:29 vm02 bash[22526]: audit 2026-03-10T05:45:28.834713+0000 mon.b (mon.2) 5 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:29 vm02 bash[22526]: audit 2026-03-10T05:45:28.837339+0000 mon.c (mon.1) 11 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:29 vm02 bash[22526]: audit 2026-03-10T05:45:28.838699+0000 mon.c (mon.1) 12 : audit [DBG] from='client.? 192.168.123.102:0/3385209186' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:29 vm02 bash[22526]: audit 2026-03-10T05:45:28.839828+0000 mon.a (mon.0) 361 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:29 vm02 bash[22526]: audit 2026-03-10T05:45:28.839881+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:29 vm02 bash[22526]: audit 2026-03-10T05:45:28.839924+0000 mon.a (mon.0) 363 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:45:29.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:29 vm02 bash[22526]: audit 2026-03-10T05:45:28.977281+0000 mon.b (mon.2) 6 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T05:45:30.507 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:30 vm05 bash[17864]: cluster 2026-03-10T05:45:28.758499+0000 mgr.y (mgr.14152) 75 : cluster [DBG] pgmap v47: 1 pgs: 1 unknown; 0 B data, 15 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:45:30.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:30 vm02 bash[17462]: cluster 2026-03-10T05:45:28.758499+0000 mgr.y (mgr.14152) 75 : cluster [DBG] pgmap v47: 1 pgs: 1 unknown; 0 B data, 15 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:45:30.583 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:30 vm02 bash[22526]: cluster 2026-03-10T05:45:28.758499+0000 mgr.y (mgr.14152) 75 : cluster [DBG] pgmap v47: 1 pgs: 1 unknown; 0 B data, 15 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:45:31.507 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 
10 05:45:31 vm05 bash[17864]: cluster 2026-03-10T05:45:30.160363+0000 mon.a (mon.0) 364 : cluster [DBG] mgrmap e15: y(active, since 76s), standbys: x 2026-03-10T05:45:31.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:31 vm02 bash[17462]: cluster 2026-03-10T05:45:30.160363+0000 mon.a (mon.0) 364 : cluster [DBG] mgrmap e15: y(active, since 76s), standbys: x 2026-03-10T05:45:31.583 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:31 vm02 bash[22526]: cluster 2026-03-10T05:45:30.160363+0000 mon.a (mon.0) 364 : cluster [DBG] mgrmap e15: y(active, since 76s), standbys: x 2026-03-10T05:45:32.507 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:32 vm05 bash[17864]: cluster 2026-03-10T05:45:30.758704+0000 mgr.y (mgr.14152) 76 : cluster [DBG] pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 17 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:45:32.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:32 vm02 bash[17462]: cluster 2026-03-10T05:45:30.758704+0000 mgr.y (mgr.14152) 76 : cluster [DBG] pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 17 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:45:32.583 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:32 vm02 bash[22526]: cluster 2026-03-10T05:45:30.758704+0000 mgr.y (mgr.14152) 76 : cluster [DBG] pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 17 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:45:34.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:33 vm05 bash[17864]: cluster 2026-03-10T05:45:32.758929+0000 mgr.y (mgr.14152) 77 : cluster [DBG] pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 17 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:45:34.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:33 vm02 bash[17462]: cluster 2026-03-10T05:45:32.758929+0000 mgr.y (mgr.14152) 77 : cluster [DBG] pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 17 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:45:34.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:33 vm02 bash[22526]: cluster 2026-03-10T05:45:32.758929+0000 mgr.y (mgr.14152) 77 : cluster [DBG] pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 17 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:45:34.809 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:34 vm02 bash[17462]: audit 2026-03-10T05:45:34.270160+0000 mon.a (mon.0) 365 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T05:45:34.809 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:34 vm02 bash[17462]: audit 2026-03-10T05:45:34.270603+0000 mon.a (mon.0) 366 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:34.809 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:34 vm02 bash[17462]: cephadm 2026-03-10T05:45:34.270938+0000 mgr.y (mgr.14152) 78 : cephadm [INF] Deploying daemon osd.3 on vm02 2026-03-10T05:45:34.809 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:34 vm02 bash[22526]: audit 2026-03-10T05:45:34.270160+0000 mon.a (mon.0) 365 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T05:45:34.809 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:34 vm02 bash[22526]: audit 2026-03-10T05:45:34.270603+0000 mon.a (mon.0) 366 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:34.809 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:34 
vm02 bash[22526]: cephadm 2026-03-10T05:45:34.270938+0000 mgr.y (mgr.14152) 78 : cephadm [INF] Deploying daemon osd.3 on vm02 2026-03-10T05:45:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:34 vm05 bash[17864]: audit 2026-03-10T05:45:34.270160+0000 mon.a (mon.0) 365 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T05:45:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:34 vm05 bash[17864]: audit 2026-03-10T05:45:34.270603+0000 mon.a (mon.0) 366 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:34 vm05 bash[17864]: cephadm 2026-03-10T05:45:34.270938+0000 mgr.y (mgr.14152) 78 : cephadm [INF] Deploying daemon osd.3 on vm02 2026-03-10T05:45:35.060 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:45:35.060 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:45:35.060 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:45:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:45:35.060 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:45:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:45:35.060 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:45:35.060 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:45:35.060 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:45:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:45:35.060 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:45:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:45:35.060 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:45:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:45:35.060 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:45:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:45:35.060 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:45:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:45:35.060 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:45:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T05:45:36.333 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:36 vm02 bash[17462]: cluster 2026-03-10T05:45:34.759134+0000 mgr.y (mgr.14152) 79 : cluster [DBG] pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 17 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:45:36.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:36 vm02 bash[17462]: audit 2026-03-10T05:45:35.067192+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:36.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:36 vm02 bash[17462]: audit 2026-03-10T05:45:35.068321+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:45:36.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:36 vm02 bash[17462]: audit 2026-03-10T05:45:35.070408+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:36.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:36 vm02 bash[17462]: audit 2026-03-10T05:45:35.074072+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:45:36.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:36 vm02 bash[22526]: cluster 2026-03-10T05:45:34.759134+0000 mgr.y (mgr.14152) 79 : cluster [DBG] pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 17 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:45:36.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:36 vm02 bash[22526]: audit 2026-03-10T05:45:35.067192+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:36.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:36 vm02 bash[22526]: audit 2026-03-10T05:45:35.068321+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:45:36.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:36 vm02 bash[22526]: audit 2026-03-10T05:45:35.070408+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:36.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:36 vm02 bash[22526]: audit 2026-03-10T05:45:35.074072+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:45:36.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:36 vm05 bash[17864]: cluster 2026-03-10T05:45:34.759134+0000 mgr.y (mgr.14152) 79 : cluster [DBG] pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 17 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:45:36.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:36 vm05 bash[17864]: audit 2026-03-10T05:45:35.067192+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:36.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:36 vm05 bash[17864]: audit 2026-03-10T05:45:35.068321+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:45:36.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:36 vm05 
bash[17864]: audit 2026-03-10T05:45:35.070408+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:45:36.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:36 vm05 bash[17864]: audit 2026-03-10T05:45:35.074072+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:45:38.294 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:38 vm02 bash[22526]: cluster 2026-03-10T05:45:36.759331+0000 mgr.y (mgr.14152) 80 : cluster [DBG] pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 17 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:45:38.294 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:38 vm02 bash[22526]: audit 2026-03-10T05:45:38.003372+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:45:38.294 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:38 vm02 bash[22526]: audit 2026-03-10T05:45:38.007337+0000 mon.a (mon.0) 372 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:45:38.294 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:38 vm02 bash[17462]: cluster 2026-03-10T05:45:36.759331+0000 mgr.y (mgr.14152) 80 : cluster [DBG] pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 17 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:45:38.294 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:38 vm02 bash[17462]: audit 2026-03-10T05:45:38.003372+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:45:38.294 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:38 vm02 bash[17462]: audit 2026-03-10T05:45:38.007337+0000 mon.a (mon.0) 372 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:45:38.350 INFO:teuthology.orchestra.run.vm02.stdout:Created osd(s) 3 on host 'vm02'
2026-03-10T05:45:38.412 DEBUG:teuthology.orchestra.run.vm02:osd.3> sudo journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.3.service
2026-03-10T05:45:38.413 INFO:tasks.cephadm:Deploying osd.4 on vm05 with /dev/vde...
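The two DEBUG records that follow show the pattern the cephadm task uses for every remaining OSD: wipe the target device, then hand it to the orchestrator. The same two commands are restated here with line wraps and comments for readability only; the binary path, image, fsid, host, and device are taken verbatim from this run:

    # step 1: destroy any leftover LVM/filesystem state on the target device
    sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- lvm zap /dev/vde
    # step 2: ask the orchestrator to create and start an OSD daemon on the clean device
    sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph orch daemon add osd vm05:/dev/vde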
2026-03-10T05:45:38.413 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- lvm zap /dev/vde
2026-03-10T05:45:38.419 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:38 vm05 bash[17864]: cluster 2026-03-10T05:45:36.759331+0000 mgr.y (mgr.14152) 80 : cluster [DBG] pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 17 MiB used, 60 GiB / 60 GiB avail
2026-03-10T05:45:38.419 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:38 vm05 bash[17864]: audit 2026-03-10T05:45:38.003372+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:45:38.419 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:38 vm05 bash[17864]: audit 2026-03-10T05:45:38.007337+0000 mon.a (mon.0) 372 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:45:38.941 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T05:45:38.951 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph orch daemon add osd vm05:/dev/vde
2026-03-10T05:45:39.194 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:39 vm05 bash[17864]: audit 2026-03-10T05:45:38.192250+0000 mon.c (mon.1) 13 : audit [INF] from='osd.3 [v2:192.168.123.102:6826/268408037,v1:192.168.123.102:6827/268408037]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T05:45:39.194 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:39 vm05 bash[17864]: audit 2026-03-10T05:45:38.192733+0000 mon.a (mon.0) 373 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T05:45:39.194 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:39 vm05 bash[17864]: audit 2026-03-10T05:45:38.346263+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:45:39.194 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:39 vm05 bash[17864]: audit 2026-03-10T05:45:38.372684+0000 mon.a (mon.0) 375 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:45:39.194 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:39 vm05 bash[17864]: audit 2026-03-10T05:45:38.373373+0000 mon.a (mon.0) 376 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:45:39.194 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:39 vm05 bash[17864]: audit 2026-03-10T05:45:38.373827+0000 mon.a (mon.0) 377 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:45:39.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:39 vm02 bash[17462]: audit 2026-03-10T05:45:38.192250+0000 mon.c (mon.1) 13 : audit [INF] from='osd.3 [v2:192.168.123.102:6826/268408037,v1:192.168.123.102:6827/268408037]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T05:45:39.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:39 vm02 bash[17462]: audit 2026-03-10T05:45:38.192733+0000 mon.a
(mon.0) 373 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T05:45:39.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:39 vm02 bash[17462]: audit 2026-03-10T05:45:38.346263+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:39.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:39 vm02 bash[17462]: audit 2026-03-10T05:45:38.372684+0000 mon.a (mon.0) 375 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:45:39.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:39 vm02 bash[17462]: audit 2026-03-10T05:45:38.373373+0000 mon.a (mon.0) 376 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:39.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:39 vm02 bash[17462]: audit 2026-03-10T05:45:38.373827+0000 mon.a (mon.0) 377 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:45:39.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:39 vm02 bash[22526]: audit 2026-03-10T05:45:38.192250+0000 mon.c (mon.1) 13 : audit [INF] from='osd.3 [v2:192.168.123.102:6826/268408037,v1:192.168.123.102:6827/268408037]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T05:45:39.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:39 vm02 bash[22526]: audit 2026-03-10T05:45:38.192733+0000 mon.a (mon.0) 373 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T05:45:39.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:39 vm02 bash[22526]: audit 2026-03-10T05:45:38.346263+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:39.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:39 vm02 bash[22526]: audit 2026-03-10T05:45:38.372684+0000 mon.a (mon.0) 375 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:45:39.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:39 vm02 bash[22526]: audit 2026-03-10T05:45:38.373373+0000 mon.a (mon.0) 376 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:39.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:39 vm02 bash[22526]: audit 2026-03-10T05:45:38.373827+0000 mon.a (mon.0) 377 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:45:40.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:40 vm05 bash[17864]: cluster 2026-03-10T05:45:38.759571+0000 mgr.y (mgr.14152) 81 : cluster [DBG] pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 17 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:45:40.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:40 vm05 bash[17864]: audit 2026-03-10T05:45:39.078736+0000 mon.a (mon.0) 378 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-10T05:45:40.508 
INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:40 vm05 bash[17864]: cluster 2026-03-10T05:45:39.078853+0000 mon.a (mon.0) 379 : cluster [DBG] osdmap e21: 4 total, 3 up, 4 in 2026-03-10T05:45:40.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:40 vm05 bash[17864]: audit 2026-03-10T05:45:39.080340+0000 mon.a (mon.0) 380 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T05:45:40.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:40 vm05 bash[17864]: audit 2026-03-10T05:45:39.081146+0000 mon.c (mon.1) 14 : audit [INF] from='osd.3 [v2:192.168.123.102:6826/268408037,v1:192.168.123.102:6827/268408037]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:45:40.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:40 vm05 bash[17864]: audit 2026-03-10T05:45:39.087137+0000 mon.a (mon.0) 381 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:45:40.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:40 vm05 bash[17864]: audit 2026-03-10T05:45:39.358437+0000 mon.a (mon.0) 382 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T05:45:40.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:40 vm05 bash[17864]: audit 2026-03-10T05:45:39.360154+0000 mon.a (mon.0) 383 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T05:45:40.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:40 vm05 bash[17864]: audit 2026-03-10T05:45:39.360542+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:40.584 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:45:40 vm02 bash[34760]: debug 2026-03-10T05:45:40.095+0000 7f4e495b7700 -1 osd.3 0 waiting for initial osdmap 2026-03-10T05:45:40.584 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:45:40 vm02 bash[34760]: debug 2026-03-10T05:45:40.103+0000 7f4e45f52700 -1 osd.3 22 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T05:45:40.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:40 vm02 bash[17462]: cluster 2026-03-10T05:45:38.759571+0000 mgr.y (mgr.14152) 81 : cluster [DBG] pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 17 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:45:40.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:40 vm02 bash[17462]: audit 2026-03-10T05:45:39.078736+0000 mon.a (mon.0) 378 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-10T05:45:40.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:40 vm02 bash[17462]: cluster 2026-03-10T05:45:39.078853+0000 mon.a (mon.0) 379 : cluster [DBG] osdmap e21: 4 total, 3 up, 4 in 2026-03-10T05:45:40.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:40 vm02 bash[17462]: audit 2026-03-10T05:45:39.080340+0000 mon.a (mon.0) 380 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T05:45:40.584 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:40 vm02 bash[17462]: audit 2026-03-10T05:45:39.081146+0000 mon.c (mon.1) 14 : audit [INF] from='osd.3 [v2:192.168.123.102:6826/268408037,v1:192.168.123.102:6827/268408037]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:45:40.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:40 vm02 bash[17462]: audit 2026-03-10T05:45:39.087137+0000 mon.a (mon.0) 381 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:45:40.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:40 vm02 bash[17462]: audit 2026-03-10T05:45:39.358437+0000 mon.a (mon.0) 382 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T05:45:40.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:40 vm02 bash[17462]: audit 2026-03-10T05:45:39.360154+0000 mon.a (mon.0) 383 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T05:45:40.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:40 vm02 bash[17462]: audit 2026-03-10T05:45:39.360542+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:40.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:40 vm02 bash[22526]: cluster 2026-03-10T05:45:38.759571+0000 mgr.y (mgr.14152) 81 : cluster [DBG] pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 17 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:45:40.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:40 vm02 bash[22526]: audit 2026-03-10T05:45:39.078736+0000 mon.a (mon.0) 378 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-10T05:45:40.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:40 vm02 bash[22526]: cluster 2026-03-10T05:45:39.078853+0000 mon.a (mon.0) 379 : cluster [DBG] osdmap e21: 4 total, 3 up, 4 in 2026-03-10T05:45:40.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:40 vm02 bash[22526]: audit 2026-03-10T05:45:39.080340+0000 mon.a (mon.0) 380 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T05:45:40.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:40 vm02 bash[22526]: audit 2026-03-10T05:45:39.081146+0000 mon.c (mon.1) 14 : audit [INF] from='osd.3 [v2:192.168.123.102:6826/268408037,v1:192.168.123.102:6827/268408037]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:45:40.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:40 vm02 bash[22526]: audit 2026-03-10T05:45:39.087137+0000 mon.a (mon.0) 381 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:45:40.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:40 vm02 bash[22526]: audit 2026-03-10T05:45:39.358437+0000 mon.a (mon.0) 382 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd tree", 
"states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T05:45:40.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:40 vm02 bash[22526]: audit 2026-03-10T05:45:39.360154+0000 mon.a (mon.0) 383 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T05:45:40.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:40 vm02 bash[22526]: audit 2026-03-10T05:45:39.360542+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:41.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:41 vm02 bash[17462]: audit 2026-03-10T05:45:39.357137+0000 mgr.y (mgr.14152) 82 : audit [DBG] from='client.24173 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:45:41.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:41 vm02 bash[17462]: audit 2026-03-10T05:45:40.088123+0000 mon.a (mon.0) 385 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-10T05:45:41.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:41 vm02 bash[17462]: cluster 2026-03-10T05:45:40.088278+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e22: 4 total, 3 up, 4 in 2026-03-10T05:45:41.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:41 vm02 bash[17462]: audit 2026-03-10T05:45:40.088996+0000 mon.a (mon.0) 387 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T05:45:41.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:41 vm02 bash[17462]: audit 2026-03-10T05:45:40.099242+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T05:45:41.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:41 vm02 bash[22526]: audit 2026-03-10T05:45:39.357137+0000 mgr.y (mgr.14152) 82 : audit [DBG] from='client.24173 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:45:41.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:41 vm02 bash[22526]: audit 2026-03-10T05:45:40.088123+0000 mon.a (mon.0) 385 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-10T05:45:41.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:41 vm02 bash[22526]: cluster 2026-03-10T05:45:40.088278+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e22: 4 total, 3 up, 4 in 2026-03-10T05:45:41.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:41 vm02 bash[22526]: audit 2026-03-10T05:45:40.088996+0000 mon.a (mon.0) 387 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T05:45:41.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:41 vm02 bash[22526]: audit 2026-03-10T05:45:40.099242+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T05:45:41.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:41 vm05 bash[17864]: audit 
2026-03-10T05:45:39.357137+0000 mgr.y (mgr.14152) 82 : audit [DBG] from='client.24173 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:45:41.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:41 vm05 bash[17864]: audit 2026-03-10T05:45:40.088123+0000 mon.a (mon.0) 385 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm02", "root=default"]}]': finished 2026-03-10T05:45:41.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:41 vm05 bash[17864]: cluster 2026-03-10T05:45:40.088278+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e22: 4 total, 3 up, 4 in 2026-03-10T05:45:41.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:41 vm05 bash[17864]: audit 2026-03-10T05:45:40.088996+0000 mon.a (mon.0) 387 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T05:45:41.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:41 vm05 bash[17864]: audit 2026-03-10T05:45:40.099242+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T05:45:42.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:42 vm05 bash[17864]: cluster 2026-03-10T05:45:39.146472+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T05:45:42.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:42 vm05 bash[17864]: cluster 2026-03-10T05:45:39.146566+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T05:45:42.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:42 vm05 bash[17864]: cluster 2026-03-10T05:45:40.759830+0000 mgr.y (mgr.14152) 83 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 17 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:45:42.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:42 vm05 bash[17864]: audit 2026-03-10T05:45:41.092225+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T05:45:42.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:42 vm05 bash[17864]: cluster 2026-03-10T05:45:41.098048+0000 mon.a (mon.0) 390 : cluster [INF] osd.3 [v2:192.168.123.102:6826/268408037,v1:192.168.123.102:6827/268408037] boot 2026-03-10T05:45:42.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:42 vm05 bash[17864]: cluster 2026-03-10T05:45:41.098251+0000 mon.a (mon.0) 391 : cluster [DBG] osdmap e23: 4 total, 4 up, 4 in 2026-03-10T05:45:42.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:42 vm05 bash[17864]: audit 2026-03-10T05:45:41.100636+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T05:45:42.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:42 vm02 bash[17462]: cluster 2026-03-10T05:45:39.146472+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T05:45:42.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:42 vm02 bash[17462]: cluster 2026-03-10T05:45:39.146566+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T05:45:42.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:42 vm02 bash[17462]: cluster 2026-03-10T05:45:40.759830+0000 mgr.y (mgr.14152) 83 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB 
data, 17 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:45:42.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:42 vm02 bash[17462]: audit 2026-03-10T05:45:41.092225+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T05:45:42.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:42 vm02 bash[17462]: cluster 2026-03-10T05:45:41.098048+0000 mon.a (mon.0) 390 : cluster [INF] osd.3 [v2:192.168.123.102:6826/268408037,v1:192.168.123.102:6827/268408037] boot 2026-03-10T05:45:42.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:42 vm02 bash[17462]: cluster 2026-03-10T05:45:41.098251+0000 mon.a (mon.0) 391 : cluster [DBG] osdmap e23: 4 total, 4 up, 4 in 2026-03-10T05:45:42.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:42 vm02 bash[17462]: audit 2026-03-10T05:45:41.100636+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T05:45:42.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:42 vm02 bash[22526]: cluster 2026-03-10T05:45:39.146472+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T05:45:42.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:42 vm02 bash[22526]: cluster 2026-03-10T05:45:39.146566+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T05:45:42.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:42 vm02 bash[22526]: cluster 2026-03-10T05:45:40.759830+0000 mgr.y (mgr.14152) 83 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 17 MiB used, 60 GiB / 60 GiB avail 2026-03-10T05:45:42.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:42 vm02 bash[22526]: audit 2026-03-10T05:45:41.092225+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T05:45:42.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:42 vm02 bash[22526]: cluster 2026-03-10T05:45:41.098048+0000 mon.a (mon.0) 390 : cluster [INF] osd.3 [v2:192.168.123.102:6826/268408037,v1:192.168.123.102:6827/268408037] boot 2026-03-10T05:45:42.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:42 vm02 bash[22526]: cluster 2026-03-10T05:45:41.098251+0000 mon.a (mon.0) 391 : cluster [DBG] osdmap e23: 4 total, 4 up, 4 in 2026-03-10T05:45:42.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:42 vm02 bash[22526]: audit 2026-03-10T05:45:41.100636+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T05:45:43.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:43 vm05 bash[17864]: cluster 2026-03-10T05:45:42.108424+0000 mon.a (mon.0) 393 : cluster [DBG] osdmap e24: 4 total, 4 up, 4 in 2026-03-10T05:45:43.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:43 vm05 bash[17864]: audit 2026-03-10T05:45:42.478929+0000 mon.b (mon.2) 7 : audit [INF] from='client.? 192.168.123.105:0/3311985571' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "49541bd1-b8b0-4d09-9b97-6ca490c33f9d"}]: dispatch 2026-03-10T05:45:43.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:43 vm05 bash[17864]: audit 2026-03-10T05:45:42.484376+0000 mon.a (mon.0) 394 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "49541bd1-b8b0-4d09-9b97-6ca490c33f9d"}]: dispatch 2026-03-10T05:45:43.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:43 vm05 bash[17864]: audit 2026-03-10T05:45:42.489976+0000 mon.a (mon.0) 395 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "49541bd1-b8b0-4d09-9b97-6ca490c33f9d"}]': finished 2026-03-10T05:45:43.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:43 vm05 bash[17864]: cluster 2026-03-10T05:45:42.490027+0000 mon.a (mon.0) 396 : cluster [DBG] osdmap e25: 5 total, 4 up, 5 in 2026-03-10T05:45:43.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:43 vm05 bash[17864]: audit 2026-03-10T05:45:42.490070+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T05:45:43.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:43 vm05 bash[17864]: audit 2026-03-10T05:45:42.662633+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:43.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:43 vm05 bash[17864]: audit 2026-03-10T05:45:42.663327+0000 mon.a (mon.0) 399 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:45:43.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:43 vm05 bash[17864]: audit 2026-03-10T05:45:42.667690+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:43.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:43 vm02 bash[17462]: cluster 2026-03-10T05:45:42.108424+0000 mon.a (mon.0) 393 : cluster [DBG] osdmap e24: 4 total, 4 up, 4 in 2026-03-10T05:45:43.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:43 vm02 bash[17462]: audit 2026-03-10T05:45:42.478929+0000 mon.b (mon.2) 7 : audit [INF] from='client.? 192.168.123.105:0/3311985571' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "49541bd1-b8b0-4d09-9b97-6ca490c33f9d"}]: dispatch 2026-03-10T05:45:43.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:43 vm02 bash[17462]: audit 2026-03-10T05:45:42.484376+0000 mon.a (mon.0) 394 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "49541bd1-b8b0-4d09-9b97-6ca490c33f9d"}]: dispatch 2026-03-10T05:45:43.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:43 vm02 bash[17462]: audit 2026-03-10T05:45:42.489976+0000 mon.a (mon.0) 395 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "49541bd1-b8b0-4d09-9b97-6ca490c33f9d"}]': finished 2026-03-10T05:45:43.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:43 vm02 bash[17462]: cluster 2026-03-10T05:45:42.490027+0000 mon.a (mon.0) 396 : cluster [DBG] osdmap e25: 5 total, 4 up, 5 in 2026-03-10T05:45:43.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:43 vm02 bash[17462]: audit 2026-03-10T05:45:42.490070+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T05:45:43.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:43 vm02 bash[17462]: audit 2026-03-10T05:45:42.662633+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:43.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:43 vm02 bash[17462]: audit 2026-03-10T05:45:42.663327+0000 mon.a (mon.0) 399 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:45:43.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:43 vm02 bash[17462]: audit 2026-03-10T05:45:42.667690+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:43.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:43 vm02 bash[22526]: cluster 2026-03-10T05:45:42.108424+0000 mon.a (mon.0) 393 : cluster [DBG] osdmap e24: 4 total, 4 up, 4 in 2026-03-10T05:45:43.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:43 vm02 bash[22526]: audit 2026-03-10T05:45:42.478929+0000 mon.b (mon.2) 7 : audit [INF] from='client.? 192.168.123.105:0/3311985571' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "49541bd1-b8b0-4d09-9b97-6ca490c33f9d"}]: dispatch 2026-03-10T05:45:43.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:43 vm02 bash[22526]: audit 2026-03-10T05:45:42.484376+0000 mon.a (mon.0) 394 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "49541bd1-b8b0-4d09-9b97-6ca490c33f9d"}]: dispatch 2026-03-10T05:45:43.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:43 vm02 bash[22526]: audit 2026-03-10T05:45:42.489976+0000 mon.a (mon.0) 395 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "49541bd1-b8b0-4d09-9b97-6ca490c33f9d"}]': finished 2026-03-10T05:45:43.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:43 vm02 bash[22526]: cluster 2026-03-10T05:45:42.490027+0000 mon.a (mon.0) 396 : cluster [DBG] osdmap e25: 5 total, 4 up, 5 in 2026-03-10T05:45:43.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:43 vm02 bash[22526]: audit 2026-03-10T05:45:42.490070+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T05:45:43.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:43 vm02 bash[22526]: audit 2026-03-10T05:45:42.662633+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:43.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:43 vm02 bash[22526]: audit 2026-03-10T05:45:42.663327+0000 mon.a (mon.0) 399 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:45:43.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:43 vm02 bash[22526]: audit 2026-03-10T05:45:42.667690+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:44.507 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:44 vm05 bash[17864]: cephadm 2026-03-10T05:45:42.656820+0000 mgr.y (mgr.14152) 84 : cephadm [INF] Detected new or changed devices on vm02 2026-03-10T05:45:44.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:44 vm05 bash[17864]: cluster 2026-03-10T05:45:42.760078+0000 mgr.y (mgr.14152) 85 : cluster [DBG] pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T05:45:44.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:44 vm05 bash[17864]: audit 2026-03-10T05:45:43.106344+0000 mon.b (mon.2) 8 : audit [DBG] from='client.? 192.168.123.105:0/1255154288' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T05:45:44.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:44 vm02 bash[17462]: cephadm 2026-03-10T05:45:42.656820+0000 mgr.y (mgr.14152) 84 : cephadm [INF] Detected new or changed devices on vm02 2026-03-10T05:45:44.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:44 vm02 bash[17462]: cluster 2026-03-10T05:45:42.760078+0000 mgr.y (mgr.14152) 85 : cluster [DBG] pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T05:45:44.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:44 vm02 bash[17462]: audit 2026-03-10T05:45:43.106344+0000 mon.b (mon.2) 8 : audit [DBG] from='client.? 192.168.123.105:0/1255154288' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T05:45:44.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:44 vm02 bash[22526]: cephadm 2026-03-10T05:45:42.656820+0000 mgr.y (mgr.14152) 84 : cephadm [INF] Detected new or changed devices on vm02 2026-03-10T05:45:44.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:44 vm02 bash[22526]: cluster 2026-03-10T05:45:42.760078+0000 mgr.y (mgr.14152) 85 : cluster [DBG] pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T05:45:44.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:44 vm02 bash[22526]: audit 2026-03-10T05:45:43.106344+0000 mon.b (mon.2) 8 : audit [DBG] from='client.? 
192.168.123.105:0/1255154288' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T05:45:46.507 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:46 vm05 bash[17864]: cluster 2026-03-10T05:45:44.760359+0000 mgr.y (mgr.14152) 86 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T05:45:46.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:46 vm02 bash[17462]: cluster 2026-03-10T05:45:44.760359+0000 mgr.y (mgr.14152) 86 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T05:45:46.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:46 vm02 bash[22526]: cluster 2026-03-10T05:45:44.760359+0000 mgr.y (mgr.14152) 86 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T05:45:48.405 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:48 vm05 bash[17864]: cluster 2026-03-10T05:45:46.760586+0000 mgr.y (mgr.14152) 87 : cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T05:45:48.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:48 vm02 bash[17462]: cluster 2026-03-10T05:45:46.760586+0000 mgr.y (mgr.14152) 87 : cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T05:45:48.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:48 vm02 bash[22526]: cluster 2026-03-10T05:45:46.760586+0000 mgr.y (mgr.14152) 87 : cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T05:45:49.185 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:45:48 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:45:49.186 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:45:49 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:45:49.186 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:48 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:45:49.186 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:49 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
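Note on the repeated systemd warnings above: per the message itself, line 24 of the generated unit template /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service sets KillMode=none, and the warning is reprinted whenever an instance of that template is (re)started, which is apparently why several of the followed journals show it at once. If one wanted to silence it outside of a test run, a systemd drop-in override would be the usual mechanism. This is an illustrative sketch only: the file name killmode.conf is invented, and changing KillMode on cephadm-managed units is not something this job does.

    # Hypothetical drop-in overriding KillMode=none on the cephadm unit template.
    # Unit name is taken from this log; "killmode.conf" is an invented file name.
    sudo mkdir -p /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service.d
    printf '[Service]\nKillMode=mixed\n' | sudo tee \
        /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service.d/killmode.conf
    sudo systemctl daemon-reload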
2026-03-10T05:45:49.186 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:49 vm05 bash[17864]: audit 2026-03-10T05:45:48.446035+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T05:45:49.186 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:49 vm05 bash[17864]: audit 2026-03-10T05:45:48.446585+0000 mon.a (mon.0) 402 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:49.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:49 vm02 bash[17462]: audit 2026-03-10T05:45:48.446035+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T05:45:49.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:49 vm02 bash[17462]: audit 2026-03-10T05:45:48.446585+0000 mon.a (mon.0) 402 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:49.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:49 vm02 bash[22526]: audit 2026-03-10T05:45:48.446035+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T05:45:49.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:49 vm02 bash[22526]: audit 2026-03-10T05:45:48.446585+0000 mon.a (mon.0) 402 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:50.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:50 vm05 bash[17864]: cephadm 2026-03-10T05:45:48.446915+0000 mgr.y (mgr.14152) 88 : cephadm [INF] Deploying daemon osd.4 on vm05 2026-03-10T05:45:50.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:50 vm05 bash[17864]: cluster 2026-03-10T05:45:48.760821+0000 mgr.y (mgr.14152) 89 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T05:45:50.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:50 vm05 bash[17864]: audit 2026-03-10T05:45:49.192672+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:50.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:50 vm05 bash[17864]: audit 2026-03-10T05:45:49.193573+0000 mon.a (mon.0) 404 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:45:50.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:50 vm05 bash[17864]: audit 2026-03-10T05:45:49.194519+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:50.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:50 vm05 bash[17864]: audit 2026-03-10T05:45:49.194973+0000 mon.a (mon.0) 406 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:45:50.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:50 vm02 bash[17462]: cephadm 2026-03-10T05:45:48.446915+0000 mgr.y (mgr.14152) 88 : cephadm [INF] Deploying daemon osd.4 on vm05 2026-03-10T05:45:50.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:50 
vm02 bash[17462]: cluster 2026-03-10T05:45:48.760821+0000 mgr.y (mgr.14152) 89 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T05:45:50.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:50 vm02 bash[17462]: audit 2026-03-10T05:45:49.192672+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:50.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:50 vm02 bash[17462]: audit 2026-03-10T05:45:49.193573+0000 mon.a (mon.0) 404 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:45:50.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:50 vm02 bash[17462]: audit 2026-03-10T05:45:49.194519+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:50.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:50 vm02 bash[17462]: audit 2026-03-10T05:45:49.194973+0000 mon.a (mon.0) 406 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:45:50.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:50 vm02 bash[22526]: cephadm 2026-03-10T05:45:48.446915+0000 mgr.y (mgr.14152) 88 : cephadm [INF] Deploying daemon osd.4 on vm05 2026-03-10T05:45:50.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:50 vm02 bash[22526]: cluster 2026-03-10T05:45:48.760821+0000 mgr.y (mgr.14152) 89 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T05:45:50.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:50 vm02 bash[22526]: audit 2026-03-10T05:45:49.192672+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:50.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:50 vm02 bash[22526]: audit 2026-03-10T05:45:49.193573+0000 mon.a (mon.0) 404 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:45:50.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:50 vm02 bash[22526]: audit 2026-03-10T05:45:49.194519+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:50.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:50 vm02 bash[22526]: audit 2026-03-10T05:45:49.194973+0000 mon.a (mon.0) 406 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:45:52.403 INFO:teuthology.orchestra.run.vm05.stdout:Created osd(s) 4 on host 'vm05' 2026-03-10T05:45:52.455 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:52 vm05 bash[17864]: cluster 2026-03-10T05:45:50.761050+0000 mgr.y (mgr.14152) 90 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T05:45:52.456 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:52 vm05 bash[17864]: audit 2026-03-10T05:45:52.000001+0000 mon.b (mon.2) 9 : audit [INF] from='osd.4 [v2:192.168.123.105:6800/1737072685,v1:192.168.123.105:6801/1737072685]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T05:45:52.456 
INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:52 vm05 bash[17864]: audit 2026-03-10T05:45:52.005318+0000 mon.a (mon.0) 407 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T05:45:52.456 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:52 vm05 bash[17864]: audit 2026-03-10T05:45:52.045311+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:52.456 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:52 vm05 bash[17864]: audit 2026-03-10T05:45:52.051454+0000 mon.a (mon.0) 409 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:52.456 DEBUG:teuthology.orchestra.run.vm05:osd.4> sudo journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.4.service 2026-03-10T05:45:52.457 INFO:tasks.cephadm:Deploying osd.5 on vm05 with /dev/vdd... 2026-03-10T05:45:52.457 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- lvm zap /dev/vdd 2026-03-10T05:45:52.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:52 vm02 bash[17462]: cluster 2026-03-10T05:45:50.761050+0000 mgr.y (mgr.14152) 90 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T05:45:52.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:52 vm02 bash[17462]: audit 2026-03-10T05:45:52.000001+0000 mon.b (mon.2) 9 : audit [INF] from='osd.4 [v2:192.168.123.105:6800/1737072685,v1:192.168.123.105:6801/1737072685]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T05:45:52.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:52 vm02 bash[17462]: audit 2026-03-10T05:45:52.005318+0000 mon.a (mon.0) 407 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T05:45:52.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:52 vm02 bash[17462]: audit 2026-03-10T05:45:52.045311+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:52.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:52 vm02 bash[17462]: audit 2026-03-10T05:45:52.051454+0000 mon.a (mon.0) 409 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:52.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:52 vm02 bash[22526]: cluster 2026-03-10T05:45:50.761050+0000 mgr.y (mgr.14152) 90 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T05:45:52.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:52 vm02 bash[22526]: audit 2026-03-10T05:45:52.000001+0000 mon.b (mon.2) 9 : audit [INF] from='osd.4 [v2:192.168.123.105:6800/1737072685,v1:192.168.123.105:6801/1737072685]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T05:45:52.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:52 vm02 bash[22526]: audit 2026-03-10T05:45:52.005318+0000 mon.a (mon.0) 407 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T05:45:52.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:52 vm02 bash[22526]: 
audit 2026-03-10T05:45:52.045311+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:52.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:52 vm02 bash[22526]: audit 2026-03-10T05:45:52.051454+0000 mon.a (mon.0) 409 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:53.017 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T05:45:53.027 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph orch daemon add osd vm05:/dev/vdd 2026-03-10T05:45:53.242 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:53 vm05 bash[17864]: audit 2026-03-10T05:45:52.204321+0000 mon.b (mon.2) 10 : audit [INF] from='osd.4 [v2:192.168.123.105:6800/1737072685,v1:192.168.123.105:6801/1737072685]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T05:45:53.242 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:53 vm05 bash[17864]: audit 2026-03-10T05:45:52.208674+0000 mon.a (mon.0) 410 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T05:45:53.242 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:53 vm05 bash[17864]: cluster 2026-03-10T05:45:52.208834+0000 mon.a (mon.0) 411 : cluster [DBG] osdmap e26: 5 total, 4 up, 5 in 2026-03-10T05:45:53.242 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:53 vm05 bash[17864]: audit 2026-03-10T05:45:52.208898+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T05:45:53.242 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:53 vm05 bash[17864]: audit 2026-03-10T05:45:52.209627+0000 mon.a (mon.0) 413 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T05:45:53.242 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:53 vm05 bash[17864]: audit 2026-03-10T05:45:52.401799+0000 mon.a (mon.0) 414 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:53.242 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:53 vm05 bash[17864]: audit 2026-03-10T05:45:52.416837+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:45:53.242 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:53 vm05 bash[17864]: audit 2026-03-10T05:45:52.417584+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:53.242 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:53 vm05 bash[17864]: audit 2026-03-10T05:45:52.418069+0000 mon.a (mon.0) 417 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:45:53.242 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:45:53 vm05 bash[20835]: debug 2026-03-10T05:45:53.214+0000 7f4423f64700 -1 osd.4 0 waiting for initial osdmap 2026-03-10T05:45:53.242 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:45:53 vm05 bash[20835]: 
debug 2026-03-10T05:45:53.226+0000 7f441e0fa700 -1 osd.4 27 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T05:45:53.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:53 vm02 bash[17462]: audit 2026-03-10T05:45:52.204321+0000 mon.b (mon.2) 10 : audit [INF] from='osd.4 [v2:192.168.123.105:6800/1737072685,v1:192.168.123.105:6801/1737072685]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T05:45:53.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:53 vm02 bash[17462]: audit 2026-03-10T05:45:52.208674+0000 mon.a (mon.0) 410 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T05:45:53.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:53 vm02 bash[17462]: cluster 2026-03-10T05:45:52.208834+0000 mon.a (mon.0) 411 : cluster [DBG] osdmap e26: 5 total, 4 up, 5 in 2026-03-10T05:45:53.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:53 vm02 bash[17462]: audit 2026-03-10T05:45:52.208898+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T05:45:53.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:53 vm02 bash[17462]: audit 2026-03-10T05:45:52.209627+0000 mon.a (mon.0) 413 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T05:45:53.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:53 vm02 bash[17462]: audit 2026-03-10T05:45:52.401799+0000 mon.a (mon.0) 414 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:53.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:53 vm02 bash[17462]: audit 2026-03-10T05:45:52.416837+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:45:53.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:53 vm02 bash[17462]: audit 2026-03-10T05:45:52.417584+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:53.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:53 vm02 bash[17462]: audit 2026-03-10T05:45:52.418069+0000 mon.a (mon.0) 417 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:45:53.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:53 vm02 bash[22526]: audit 2026-03-10T05:45:52.204321+0000 mon.b (mon.2) 10 : audit [INF] from='osd.4 [v2:192.168.123.105:6800/1737072685,v1:192.168.123.105:6801/1737072685]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T05:45:53.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:53 vm02 bash[22526]: audit 2026-03-10T05:45:52.208674+0000 mon.a (mon.0) 410 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T05:45:53.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:53 vm02 bash[22526]: cluster 2026-03-10T05:45:52.208834+0000 mon.a (mon.0) 411 : cluster 
[DBG] osdmap e26: 5 total, 4 up, 5 in 2026-03-10T05:45:53.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:53 vm02 bash[22526]: audit 2026-03-10T05:45:52.208898+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T05:45:53.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:53 vm02 bash[22526]: audit 2026-03-10T05:45:52.209627+0000 mon.a (mon.0) 413 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T05:45:53.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:53 vm02 bash[22526]: audit 2026-03-10T05:45:52.401799+0000 mon.a (mon.0) 414 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:53.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:53 vm02 bash[22526]: audit 2026-03-10T05:45:52.416837+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:45:53.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:53 vm02 bash[22526]: audit 2026-03-10T05:45:52.417584+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:53.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:53 vm02 bash[22526]: audit 2026-03-10T05:45:52.418069+0000 mon.a (mon.0) 417 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:45:54.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:54 vm05 bash[17864]: cluster 2026-03-10T05:45:52.761313+0000 mgr.y (mgr.14152) 91 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T05:45:54.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:54 vm05 bash[17864]: audit 2026-03-10T05:45:53.216573+0000 mon.a (mon.0) 418 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T05:45:54.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:54 vm05 bash[17864]: cluster 2026-03-10T05:45:53.216792+0000 mon.a (mon.0) 419 : cluster [DBG] osdmap e27: 5 total, 4 up, 5 in 2026-03-10T05:45:54.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:54 vm05 bash[17864]: audit 2026-03-10T05:45:53.217749+0000 mon.a (mon.0) 420 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T05:45:54.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:54 vm05 bash[17864]: audit 2026-03-10T05:45:53.226254+0000 mon.a (mon.0) 421 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T05:45:54.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:54 vm05 bash[17864]: audit 2026-03-10T05:45:53.404745+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T05:45:54.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:54 vm05 bash[17864]: audit 2026-03-10T05:45:53.405897+0000 mon.a (mon.0) 423 : audit [INF] from='mgr.14152 
192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T05:45:54.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:54 vm05 bash[17864]: audit 2026-03-10T05:45:53.406253+0000 mon.a (mon.0) 424 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:54.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:54 vm05 bash[17864]: cluster 2026-03-10T05:45:54.218414+0000 mon.a (mon.0) 425 : cluster [INF] osd.4 [v2:192.168.123.105:6800/1737072685,v1:192.168.123.105:6801/1737072685] boot 2026-03-10T05:45:54.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:54 vm05 bash[17864]: cluster 2026-03-10T05:45:54.218508+0000 mon.a (mon.0) 426 : cluster [DBG] osdmap e28: 5 total, 5 up, 5 in 2026-03-10T05:45:54.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:54 vm05 bash[17864]: audit 2026-03-10T05:45:54.218740+0000 mon.a (mon.0) 427 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T05:45:54.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:54 vm02 bash[17462]: cluster 2026-03-10T05:45:52.761313+0000 mgr.y (mgr.14152) 91 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T05:45:54.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:54 vm02 bash[17462]: audit 2026-03-10T05:45:53.216573+0000 mon.a (mon.0) 418 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T05:45:54.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:54 vm02 bash[17462]: cluster 2026-03-10T05:45:53.216792+0000 mon.a (mon.0) 419 : cluster [DBG] osdmap e27: 5 total, 4 up, 5 in 2026-03-10T05:45:54.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:54 vm02 bash[17462]: audit 2026-03-10T05:45:53.217749+0000 mon.a (mon.0) 420 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T05:45:54.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:54 vm02 bash[17462]: audit 2026-03-10T05:45:53.226254+0000 mon.a (mon.0) 421 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T05:45:54.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:54 vm02 bash[17462]: audit 2026-03-10T05:45:53.404745+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T05:45:54.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:54 vm02 bash[17462]: audit 2026-03-10T05:45:53.405897+0000 mon.a (mon.0) 423 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T05:45:54.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:54 vm02 bash[17462]: audit 2026-03-10T05:45:53.406253+0000 mon.a (mon.0) 424 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:54.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:54 vm02 bash[17462]: cluster 2026-03-10T05:45:54.218414+0000 mon.a (mon.0) 425 : cluster [INF] osd.4 
[v2:192.168.123.105:6800/1737072685,v1:192.168.123.105:6801/1737072685] boot 2026-03-10T05:45:54.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:54 vm02 bash[17462]: cluster 2026-03-10T05:45:54.218508+0000 mon.a (mon.0) 426 : cluster [DBG] osdmap e28: 5 total, 5 up, 5 in 2026-03-10T05:45:54.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:54 vm02 bash[17462]: audit 2026-03-10T05:45:54.218740+0000 mon.a (mon.0) 427 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T05:45:54.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:54 vm02 bash[22526]: cluster 2026-03-10T05:45:52.761313+0000 mgr.y (mgr.14152) 91 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T05:45:54.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:54 vm02 bash[22526]: audit 2026-03-10T05:45:53.216573+0000 mon.a (mon.0) 418 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T05:45:54.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:54 vm02 bash[22526]: cluster 2026-03-10T05:45:53.216792+0000 mon.a (mon.0) 419 : cluster [DBG] osdmap e27: 5 total, 4 up, 5 in 2026-03-10T05:45:54.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:54 vm02 bash[22526]: audit 2026-03-10T05:45:53.217749+0000 mon.a (mon.0) 420 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T05:45:54.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:54 vm02 bash[22526]: audit 2026-03-10T05:45:53.226254+0000 mon.a (mon.0) 421 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T05:45:54.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:54 vm02 bash[22526]: audit 2026-03-10T05:45:53.404745+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T05:45:54.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:54 vm02 bash[22526]: audit 2026-03-10T05:45:53.405897+0000 mon.a (mon.0) 423 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T05:45:54.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:54 vm02 bash[22526]: audit 2026-03-10T05:45:53.406253+0000 mon.a (mon.0) 424 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:45:54.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:54 vm02 bash[22526]: cluster 2026-03-10T05:45:54.218414+0000 mon.a (mon.0) 425 : cluster [INF] osd.4 [v2:192.168.123.105:6800/1737072685,v1:192.168.123.105:6801/1737072685] boot 2026-03-10T05:45:54.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:54 vm02 bash[22526]: cluster 2026-03-10T05:45:54.218508+0000 mon.a (mon.0) 426 : cluster [DBG] osdmap e28: 5 total, 5 up, 5 in 2026-03-10T05:45:54.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:54 vm02 bash[22526]: audit 2026-03-10T05:45:54.218740+0000 mon.a (mon.0) 427 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T05:45:55.508 
INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:55 vm05 bash[17864]: cluster 2026-03-10T05:45:53.043441+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T05:45:55.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:55 vm05 bash[17864]: cluster 2026-03-10T05:45:53.043523+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T05:45:55.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:55 vm05 bash[17864]: audit 2026-03-10T05:45:53.403484+0000 mgr.y (mgr.14152) 92 : audit [DBG] from='client.24200 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:45:55.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:55 vm05 bash[17864]: cluster 2026-03-10T05:45:55.219537+0000 mon.a (mon.0) 428 : cluster [DBG] osdmap e29: 5 total, 5 up, 5 in 2026-03-10T05:45:55.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:55 vm02 bash[17462]: cluster 2026-03-10T05:45:53.043441+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T05:45:55.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:55 vm02 bash[17462]: cluster 2026-03-10T05:45:53.043523+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T05:45:55.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:55 vm02 bash[17462]: audit 2026-03-10T05:45:53.403484+0000 mgr.y (mgr.14152) 92 : audit [DBG] from='client.24200 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:45:55.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:55 vm02 bash[17462]: cluster 2026-03-10T05:45:55.219537+0000 mon.a (mon.0) 428 : cluster [DBG] osdmap e29: 5 total, 5 up, 5 in 2026-03-10T05:45:55.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:55 vm02 bash[22526]: cluster 2026-03-10T05:45:53.043441+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T05:45:55.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:55 vm02 bash[22526]: cluster 2026-03-10T05:45:53.043523+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T05:45:55.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:55 vm02 bash[22526]: audit 2026-03-10T05:45:53.403484+0000 mgr.y (mgr.14152) 92 : audit [DBG] from='client.24200 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:45:55.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:55 vm02 bash[22526]: cluster 2026-03-10T05:45:55.219537+0000 mon.a (mon.0) 428 : cluster [DBG] osdmap e29: 5 total, 5 up, 5 in 2026-03-10T05:45:56.504 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:56 vm05 bash[17864]: cluster 2026-03-10T05:45:54.761565+0000 mgr.y (mgr.14152) 93 : cluster [DBG] pgmap v68: 1 pgs: 1 remapped+peering; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-10T05:45:56.504 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:56 vm05 bash[17864]: cluster 2026-03-10T05:45:56.225046+0000 mon.a (mon.0) 429 : cluster [DBG] osdmap e30: 5 total, 5 up, 5 in 2026-03-10T05:45:56.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:56 vm02 bash[17462]: cluster 2026-03-10T05:45:54.761565+0000 mgr.y (mgr.14152) 93 : cluster [DBG] pgmap v68: 1 pgs: 1 remapped+peering; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-10T05:45:56.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:56 vm02 bash[17462]: cluster 
2026-03-10T05:45:56.225046+0000 mon.a (mon.0) 429 : cluster [DBG] osdmap e30: 5 total, 5 up, 5 in 2026-03-10T05:45:56.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:56 vm02 bash[22526]: cluster 2026-03-10T05:45:54.761565+0000 mgr.y (mgr.14152) 93 : cluster [DBG] pgmap v68: 1 pgs: 1 remapped+peering; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-10T05:45:56.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:56 vm02 bash[22526]: cluster 2026-03-10T05:45:56.225046+0000 mon.a (mon.0) 429 : cluster [DBG] osdmap e30: 5 total, 5 up, 5 in 2026-03-10T05:45:57.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:57 vm02 bash[17462]: cephadm 2026-03-10T05:45:56.560804+0000 mgr.y (mgr.14152) 94 : cephadm [INF] Detected new or changed devices on vm05 2026-03-10T05:45:57.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:57 vm02 bash[17462]: audit 2026-03-10T05:45:56.566503+0000 mon.a (mon.0) 430 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:57.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:57 vm02 bash[17462]: audit 2026-03-10T05:45:56.567121+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:45:57.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:57 vm02 bash[17462]: cephadm 2026-03-10T05:45:56.567448+0000 mgr.y (mgr.14152) 95 : cephadm [INF] Adjusting osd_memory_target on vm05 to 455.7M 2026-03-10T05:45:57.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:57 vm02 bash[17462]: cephadm 2026-03-10T05:45:56.567792+0000 mgr.y (mgr.14152) 96 : cephadm [WRN] Unable to set osd_memory_target on vm05 to 477915955: error parsing value: Value '477915955' is below minimum 939524096 2026-03-10T05:45:57.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:57 vm02 bash[17462]: audit 2026-03-10T05:45:56.571554+0000 mon.a (mon.0) 432 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:57.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:57 vm02 bash[17462]: cluster 2026-03-10T05:45:56.761811+0000 mgr.y (mgr.14152) 97 : cluster [DBG] pgmap v71: 1 pgs: 1 remapped+peering; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-10T05:45:57.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:57 vm02 bash[17462]: audit 2026-03-10T05:45:56.988082+0000 mon.c (mon.1) 15 : audit [INF] from='client.? 192.168.123.105:0/2826836448' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2b35feb0-b492-4603-81e0-b864fb275f8c"}]: dispatch 2026-03-10T05:45:57.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:57 vm02 bash[17462]: audit 2026-03-10T05:45:56.988496+0000 mon.a (mon.0) 433 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2b35feb0-b492-4603-81e0-b864fb275f8c"}]: dispatch 2026-03-10T05:45:57.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:57 vm02 bash[17462]: audit 2026-03-10T05:45:56.994048+0000 mon.a (mon.0) 434 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2b35feb0-b492-4603-81e0-b864fb275f8c"}]': finished 2026-03-10T05:45:57.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:57 vm02 bash[17462]: cluster 2026-03-10T05:45:56.994110+0000 mon.a (mon.0) 435 : cluster [DBG] osdmap e31: 6 total, 5 up, 6 in 2026-03-10T05:45:57.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:57 vm02 bash[17462]: audit 2026-03-10T05:45:56.994172+0000 mon.a (mon.0) 436 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T05:45:57.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:57 vm02 bash[22526]: cephadm 2026-03-10T05:45:56.560804+0000 mgr.y (mgr.14152) 94 : cephadm [INF] Detected new or changed devices on vm05 2026-03-10T05:45:57.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:57 vm02 bash[22526]: audit 2026-03-10T05:45:56.566503+0000 mon.a (mon.0) 430 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:57.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:57 vm02 bash[22526]: audit 2026-03-10T05:45:56.567121+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:45:57.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:57 vm02 bash[22526]: cephadm 2026-03-10T05:45:56.567448+0000 mgr.y (mgr.14152) 95 : cephadm [INF] Adjusting osd_memory_target on vm05 to 455.7M 2026-03-10T05:45:57.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:57 vm02 bash[22526]: cephadm 2026-03-10T05:45:56.567792+0000 mgr.y (mgr.14152) 96 : cephadm [WRN] Unable to set osd_memory_target on vm05 to 477915955: error parsing value: Value '477915955' is below minimum 939524096 2026-03-10T05:45:57.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:57 vm02 bash[22526]: audit 2026-03-10T05:45:56.571554+0000 mon.a (mon.0) 432 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:57.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:57 vm02 bash[22526]: cluster 2026-03-10T05:45:56.761811+0000 mgr.y (mgr.14152) 97 : cluster [DBG] pgmap v71: 1 pgs: 1 remapped+peering; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-10T05:45:57.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:57 vm02 bash[22526]: audit 2026-03-10T05:45:56.988082+0000 mon.c (mon.1) 15 : audit [INF] from='client.? 192.168.123.105:0/2826836448' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2b35feb0-b492-4603-81e0-b864fb275f8c"}]: dispatch 2026-03-10T05:45:57.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:57 vm02 bash[22526]: audit 2026-03-10T05:45:56.988496+0000 mon.a (mon.0) 433 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2b35feb0-b492-4603-81e0-b864fb275f8c"}]: dispatch 2026-03-10T05:45:57.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:57 vm02 bash[22526]: audit 2026-03-10T05:45:56.994048+0000 mon.a (mon.0) 434 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2b35feb0-b492-4603-81e0-b864fb275f8c"}]': finished 2026-03-10T05:45:57.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:57 vm02 bash[22526]: cluster 2026-03-10T05:45:56.994110+0000 mon.a (mon.0) 435 : cluster [DBG] osdmap e31: 6 total, 5 up, 6 in 2026-03-10T05:45:57.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:57 vm02 bash[22526]: audit 2026-03-10T05:45:56.994172+0000 mon.a (mon.0) 436 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T05:45:58.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:57 vm05 bash[17864]: cephadm 2026-03-10T05:45:56.560804+0000 mgr.y (mgr.14152) 94 : cephadm [INF] Detected new or changed devices on vm05 2026-03-10T05:45:58.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:57 vm05 bash[17864]: audit 2026-03-10T05:45:56.566503+0000 mon.a (mon.0) 430 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:58.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:57 vm05 bash[17864]: audit 2026-03-10T05:45:56.567121+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:45:58.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:57 vm05 bash[17864]: cephadm 2026-03-10T05:45:56.567448+0000 mgr.y (mgr.14152) 95 : cephadm [INF] Adjusting osd_memory_target on vm05 to 455.7M 2026-03-10T05:45:58.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:57 vm05 bash[17864]: cephadm 2026-03-10T05:45:56.567792+0000 mgr.y (mgr.14152) 96 : cephadm [WRN] Unable to set osd_memory_target on vm05 to 477915955: error parsing value: Value '477915955' is below minimum 939524096 2026-03-10T05:45:58.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:57 vm05 bash[17864]: audit 2026-03-10T05:45:56.571554+0000 mon.a (mon.0) 432 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:45:58.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:57 vm05 bash[17864]: cluster 2026-03-10T05:45:56.761811+0000 mgr.y (mgr.14152) 97 : cluster [DBG] pgmap v71: 1 pgs: 1 remapped+peering; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-10T05:45:58.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:57 vm05 bash[17864]: audit 2026-03-10T05:45:56.988082+0000 mon.c (mon.1) 15 : audit [INF] from='client.? 192.168.123.105:0/2826836448' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2b35feb0-b492-4603-81e0-b864fb275f8c"}]: dispatch 2026-03-10T05:45:58.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:57 vm05 bash[17864]: audit 2026-03-10T05:45:56.988496+0000 mon.a (mon.0) 433 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2b35feb0-b492-4603-81e0-b864fb275f8c"}]: dispatch 2026-03-10T05:45:58.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:57 vm05 bash[17864]: audit 2026-03-10T05:45:56.994048+0000 mon.a (mon.0) 434 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2b35feb0-b492-4603-81e0-b864fb275f8c"}]': finished 2026-03-10T05:45:58.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:57 vm05 bash[17864]: cluster 2026-03-10T05:45:56.994110+0000 mon.a (mon.0) 435 : cluster [DBG] osdmap e31: 6 total, 5 up, 6 in 2026-03-10T05:45:58.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:57 vm05 bash[17864]: audit 2026-03-10T05:45:56.994172+0000 mon.a (mon.0) 436 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T05:45:58.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:58 vm02 bash[17462]: audit 2026-03-10T05:45:57.561505+0000 mon.b (mon.2) 11 : audit [DBG] from='client.? 192.168.123.105:0/577048768' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T05:45:58.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:58 vm02 bash[22526]: audit 2026-03-10T05:45:57.561505+0000 mon.b (mon.2) 11 : audit [DBG] from='client.? 192.168.123.105:0/577048768' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T05:45:59.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:58 vm05 bash[17864]: audit 2026-03-10T05:45:57.561505+0000 mon.b (mon.2) 11 : audit [DBG] from='client.? 192.168.123.105:0/577048768' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T05:45:59.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:45:59 vm02 bash[17462]: cluster 2026-03-10T05:45:58.762066+0000 mgr.y (mgr.14152) 98 : cluster [DBG] pgmap v73: 1 pgs: 1 remapped+peering; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-10T05:45:59.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:45:59 vm02 bash[22526]: cluster 2026-03-10T05:45:58.762066+0000 mgr.y (mgr.14152) 98 : cluster [DBG] pgmap v73: 1 pgs: 1 remapped+peering; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-10T05:45:59.838 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:45:59 vm05 bash[17864]: cluster 2026-03-10T05:45:58.762066+0000 mgr.y (mgr.14152) 98 : cluster [DBG] pgmap v73: 1 pgs: 1 remapped+peering; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-10T05:46:02.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:01 vm02 bash[17462]: cluster 2026-03-10T05:46:00.762331+0000 mgr.y (mgr.14152) 99 : cluster [DBG] pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 65 KiB/s, 0 objects/s recovering 2026-03-10T05:46:02.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:01 vm02 bash[22526]: cluster 2026-03-10T05:46:00.762331+0000 mgr.y (mgr.14152) 99 : cluster [DBG] pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 65 KiB/s, 0 objects/s recovering 2026-03-10T05:46:02.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:01 vm05 bash[17864]: cluster 2026-03-10T05:46:00.762331+0000 mgr.y (mgr.14152) 99 : cluster [DBG] pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 65 KiB/s, 0 objects/s recovering 2026-03-10T05:46:03.696 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:03 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:46:03.696 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:03 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:46:03.698 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:03 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:46:03.698 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:03 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:46:03.698 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:46:03 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:46:03.698 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:46:03 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
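The osd_memory_target exchange a few entries above is worth decoding: cephadm's autotuner divides the host's memory among the daemons on vm05 and arrives at roughly 456 MiB per OSD, but osd_memory_target has a hard floor, so the value is rejected and cephadm downgrades to a WRN. The figures in the log check out (shell arithmetic, MiB values truncated):

    # Values taken verbatim from the cephadm WRN entry above.
    echo $(( 477915955 / 1048576 ))   # 455 -> the ~455.7M target cephadm computed
    echo $(( 939524096 / 1048576 ))   # 896 -> osd_memory_target's minimum (0.875 GiB)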
2026-03-10T05:46:04.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:03 vm05 bash[17864]: cluster 2026-03-10T05:46:02.762625+0000 mgr.y (mgr.14152) 100 : cluster [DBG] pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 52 KiB/s, 0 objects/s recovering 2026-03-10T05:46:04.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:03 vm05 bash[17864]: audit 2026-03-10T05:46:02.938722+0000 mon.a (mon.0) 437 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T05:46:04.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:03 vm05 bash[17864]: audit 2026-03-10T05:46:02.939314+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:46:04.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:03 vm05 bash[17864]: cephadm 2026-03-10T05:46:02.939779+0000 mgr.y (mgr.14152) 101 : cephadm [INF] Deploying daemon osd.5 on vm05 2026-03-10T05:46:04.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:03 vm02 bash[17462]: cluster 2026-03-10T05:46:02.762625+0000 mgr.y (mgr.14152) 100 : cluster [DBG] pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 52 KiB/s, 0 objects/s recovering 2026-03-10T05:46:04.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:03 vm02 bash[17462]: audit 2026-03-10T05:46:02.938722+0000 mon.a (mon.0) 437 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T05:46:04.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:03 vm02 bash[17462]: audit 2026-03-10T05:46:02.939314+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:46:04.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:03 vm02 bash[17462]: cephadm 2026-03-10T05:46:02.939779+0000 mgr.y (mgr.14152) 101 : cephadm [INF] Deploying daemon osd.5 on vm05 2026-03-10T05:46:04.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:03 vm02 bash[22526]: cluster 2026-03-10T05:46:02.762625+0000 mgr.y (mgr.14152) 100 : cluster [DBG] pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 52 KiB/s, 0 objects/s recovering 2026-03-10T05:46:04.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:03 vm02 bash[22526]: audit 2026-03-10T05:46:02.938722+0000 mon.a (mon.0) 437 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T05:46:04.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:03 vm02 bash[22526]: audit 2026-03-10T05:46:02.939314+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:46:04.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:03 vm02 bash[22526]: cephadm 2026-03-10T05:46:02.939779+0000 mgr.y (mgr.14152) 101 : cephadm [INF] Deploying daemon osd.5 on vm05 2026-03-10T05:46:04.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:04 vm05 bash[17864]: audit 2026-03-10T05:46:03.717790+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:46:04.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:04 vm05 bash[17864]: audit 
2026-03-10T05:46:03.746435+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:46:04.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:04 vm05 bash[17864]: audit 2026-03-10T05:46:03.747135+0000 mon.a (mon.0) 441 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:46:04.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:04 vm05 bash[17864]: audit 2026-03-10T05:46:03.747544+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:46:05.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:04 vm02 bash[17462]: audit 2026-03-10T05:46:03.717790+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:46:05.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:04 vm02 bash[17462]: audit 2026-03-10T05:46:03.746435+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:46:05.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:04 vm02 bash[17462]: audit 2026-03-10T05:46:03.747135+0000 mon.a (mon.0) 441 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:46:05.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:04 vm02 bash[17462]: audit 2026-03-10T05:46:03.747544+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:46:05.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:04 vm02 bash[22526]: audit 2026-03-10T05:46:03.717790+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:46:05.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:04 vm02 bash[22526]: audit 2026-03-10T05:46:03.746435+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:46:05.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:04 vm02 bash[22526]: audit 2026-03-10T05:46:03.747135+0000 mon.a (mon.0) 441 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:46:05.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:04 vm02 bash[22526]: audit 2026-03-10T05:46:03.747544+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:46:06.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:05 vm05 bash[17864]: cluster 2026-03-10T05:46:04.762839+0000 mgr.y (mgr.14152) 102 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T05:46:06.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:05 vm02 bash[17462]: cluster 2026-03-10T05:46:04.762839+0000 mgr.y (mgr.14152) 102 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 45 KiB/s, 0 objects/s recovering 
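By this point osd.4 has booted (osdmap e28: 5 total, 5 up, 5 in) and the pool has re-peered back to active+clean. The CRUSH weight 0.0195 reported during create-or-move is each 20 GiB virtual disk expressed in TiB (20/1024 = 0.01953, matching the cluster capacity jump from 80 GiB to 100 GiB once osd.4 came up). If one were poking at the cluster interactively here, the standard status commands below would confirm the same picture; this is an illustrative check, not one of the recorded test steps, though the image, fsid, and paths are the ones this job uses.

    # Run inside a cephadm shell on either host; all three commands are
    # standard ceph CLI status queries.
    sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- bash -c \
        'ceph osd stat && ceph osd tree && ceph pg stat'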
2026-03-10T05:46:06.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:05 vm02 bash[22526]: cluster 2026-03-10T05:46:04.762839+0000 mgr.y (mgr.14152) 102 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 45 KiB/s, 0 objects/s recovering
2026-03-10T05:46:06.981 INFO:teuthology.orchestra.run.vm05.stdout:Created osd(s) 5 on host 'vm05'
2026-03-10T05:46:06.991 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:06 vm05 bash[17864]: audit 2026-03-10T05:46:06.559224+0000 mon.c (mon.1) 16 : audit [INF] from='osd.5 [v2:192.168.123.105:6808/3303341454,v1:192.168.123.105:6809/3303341454]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-10T05:46:06.991 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:06 vm05 bash[17864]: audit 2026-03-10T05:46:06.559629+0000 mon.a (mon.0) 443 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-10T05:46:06.991 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:06 vm05 bash[17864]: audit 2026-03-10T05:46:06.610060+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:06.991 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:06 vm05 bash[17864]: audit 2026-03-10T05:46:06.616018+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:07.049 DEBUG:teuthology.orchestra.run.vm05:osd.5> sudo journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.5.service
2026-03-10T05:46:07.050 INFO:tasks.cephadm:Deploying osd.6 on vm05 with /dev/vdc...
2026-03-10T05:46:07.050 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- lvm zap /dev/vdc
2026-03-10T05:46:07.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:06 vm02 bash[17462]: audit 2026-03-10T05:46:06.559224+0000 mon.c (mon.1) 16 : audit [INF] from='osd.5 [v2:192.168.123.105:6808/3303341454,v1:192.168.123.105:6809/3303341454]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-10T05:46:07.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:06 vm02 bash[17462]: audit 2026-03-10T05:46:06.559629+0000 mon.a (mon.0) 443 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-10T05:46:07.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:06 vm02 bash[17462]: audit 2026-03-10T05:46:06.610060+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:07.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:06 vm02 bash[17462]: audit 2026-03-10T05:46:06.616018+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:07.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:06 vm02 bash[22526]: audit 2026-03-10T05:46:06.559224+0000 mon.c (mon.1) 16 : audit [INF] from='osd.5 [v2:192.168.123.105:6808/3303341454,v1:192.168.123.105:6809/3303341454]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-10T05:46:07.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:06 vm02 bash[22526]: audit 2026-03-10T05:46:06.559629+0000 mon.a (mon.0) 443 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-10T05:46:07.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:06 vm02 bash[22526]: audit 2026-03-10T05:46:06.610060+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:07.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:06 vm02 bash[22526]: audit 2026-03-10T05:46:06.616018+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:07.626 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T05:46:07.634 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph orch daemon add osd vm05:/dev/vdc
2026-03-10T05:46:07.858 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:07 vm05 bash[17864]: audit 2026-03-10T05:46:06.745743+0000 mon.a (mon.0) 446 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-10T05:46:07.858 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:07 vm05 bash[17864]: cluster 2026-03-10T05:46:06.746090+0000 mon.a (mon.0) 447 : cluster [DBG] osdmap e32: 6 total, 5 up, 6 in
2026-03-10T05:46:07.858 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:07 vm05 bash[17864]: audit 2026-03-10T05:46:06.746215+0000 mon.a (mon.0) 448 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T05:46:07.858 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:07 vm05 bash[17864]: audit 2026-03-10T05:46:06.746603+0000 mon.c (mon.1) 17 : audit [INF] from='osd.5 [v2:192.168.123.105:6808/3303341454,v1:192.168.123.105:6809/3303341454]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T05:46:07.858 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:07 vm05 bash[17864]: audit 2026-03-10T05:46:06.746898+0000 mon.a (mon.0) 449 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T05:46:07.858 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:07 vm05 bash[17864]: cluster 2026-03-10T05:46:06.763081+0000 mgr.y (mgr.14152) 103 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 40 KiB/s, 0 objects/s recovering
2026-03-10T05:46:07.858 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:07 vm05 bash[17864]: audit 2026-03-10T05:46:06.980464+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:07.858 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:07 vm05 bash[17864]: audit 2026-03-10T05:46:06.982592+0000 mon.a (mon.0) 451 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:46:07.858 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:07 vm05 bash[17864]: audit 2026-03-10T05:46:06.983244+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:46:07.858 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:07 vm05 bash[17864]: audit 2026-03-10T05:46:06.983829+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:46:07.858 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:46:07 vm05 bash[23962]: debug 2026-03-10T05:46:07.754+0000 7f2c1616c700 -1 osd.5 0 waiting for initial osdmap
2026-03-10T05:46:07.858 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:46:07 vm05 bash[23962]: debug 2026-03-10T05:46:07.758+0000 7f2c12306700 -1 osd.5 33 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-10T05:46:08.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:07 vm02 bash[17462]: audit 2026-03-10T05:46:06.745743+0000 mon.a (mon.0) 446 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-10T05:46:08.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:07 vm02 bash[17462]: cluster 2026-03-10T05:46:06.746090+0000 mon.a (mon.0) 447 : cluster [DBG] osdmap e32: 6 total, 5 up, 6 in
2026-03-10T05:46:08.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:07 vm02 bash[17462]: audit 2026-03-10T05:46:06.746215+0000 mon.a (mon.0) 448 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T05:46:08.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:07 vm02 bash[17462]: audit 2026-03-10T05:46:06.746603+0000 mon.c (mon.1) 17 : audit [INF] from='osd.5 [v2:192.168.123.105:6808/3303341454,v1:192.168.123.105:6809/3303341454]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T05:46:08.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:07 vm02 bash[17462]: audit 2026-03-10T05:46:06.746898+0000 mon.a (mon.0) 449 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T05:46:08.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:07 vm02 bash[17462]: cluster 2026-03-10T05:46:06.763081+0000 mgr.y (mgr.14152) 103 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 40 KiB/s, 0 objects/s recovering
2026-03-10T05:46:08.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:07 vm02 bash[17462]: audit 2026-03-10T05:46:06.980464+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:08.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:07 vm02 bash[17462]: audit 2026-03-10T05:46:06.982592+0000 mon.a (mon.0) 451 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:46:08.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:07 vm02 bash[17462]: audit 2026-03-10T05:46:06.983244+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:46:08.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:07 vm02 bash[17462]: audit 2026-03-10T05:46:06.983829+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:46:08.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:07 vm02 bash[22526]: audit 2026-03-10T05:46:06.745743+0000 mon.a (mon.0) 446 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-10T05:46:08.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:07 vm02 bash[22526]: cluster 2026-03-10T05:46:06.746090+0000 mon.a (mon.0) 447 : cluster [DBG] osdmap e32: 6 total, 5 up, 6 in
2026-03-10T05:46:08.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:07 vm02 bash[22526]: audit 2026-03-10T05:46:06.746215+0000 mon.a (mon.0) 448 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T05:46:08.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:07 vm02 bash[22526]: audit 2026-03-10T05:46:06.746603+0000 mon.c (mon.1) 17 : audit [INF] from='osd.5 [v2:192.168.123.105:6808/3303341454,v1:192.168.123.105:6809/3303341454]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T05:46:08.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:07 vm02 bash[22526]: audit 2026-03-10T05:46:06.746898+0000 mon.a (mon.0) 449 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T05:46:08.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:07 vm02 bash[22526]: cluster 2026-03-10T05:46:06.763081+0000 mgr.y (mgr.14152) 103 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 40 KiB/s, 0 objects/s recovering
2026-03-10T05:46:08.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:07 vm02 bash[22526]: audit 2026-03-10T05:46:06.980464+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:08.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:07 vm02 bash[22526]: audit 2026-03-10T05:46:06.982592+0000 mon.a (mon.0) 451 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:46:08.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:07 vm02 bash[22526]: audit 2026-03-10T05:46:06.983244+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:46:08.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:07 vm02 bash[22526]: audit 2026-03-10T05:46:06.983829+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:46:09.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:08 vm02 bash[17462]: audit 2026-03-10T05:46:07.748965+0000 mon.a (mon.0) 454 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished
2026-03-10T05:46:09.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:08 vm02 bash[17462]: cluster 2026-03-10T05:46:07.749009+0000 mon.a (mon.0) 455 : cluster [DBG] osdmap e33: 6 total, 5 up, 6 in
2026-03-10T05:46:09.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:08 vm02 bash[17462]: audit 2026-03-10T05:46:07.749683+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T05:46:09.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:08 vm02 bash[17462]: audit 2026-03-10T05:46:07.764393+0000 mon.a (mon.0) 457 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T05:46:09.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:08 vm02 bash[17462]: audit 2026-03-10T05:46:08.037636+0000 mgr.y (mgr.14152) 104 : audit [DBG] from='client.24227 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:46:09.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:08 vm02 bash[17462]: audit 2026-03-10T05:46:08.038897+0000 mon.a (mon.0) 458 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T05:46:09.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:08 vm02 bash[17462]: audit 2026-03-10T05:46:08.040180+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T05:46:09.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:08 vm02 bash[17462]: audit 2026-03-10T05:46:08.040552+0000 mon.a (mon.0) 460 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:46:09.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:08 vm02 bash[22526]: audit 2026-03-10T05:46:07.748965+0000 mon.a (mon.0) 454 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished
2026-03-10T05:46:09.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:08 vm02 bash[22526]: cluster 2026-03-10T05:46:07.749009+0000 mon.a (mon.0) 455 : cluster [DBG] osdmap e33: 6 total, 5 up, 6 in
2026-03-10T05:46:09.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:08 vm02 bash[22526]: audit 2026-03-10T05:46:07.749683+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T05:46:09.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:08 vm02 bash[22526]: audit 2026-03-10T05:46:07.764393+0000 mon.a (mon.0) 457 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T05:46:09.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:08 vm02 bash[22526]: audit 2026-03-10T05:46:08.037636+0000 mgr.y (mgr.14152) 104 : audit [DBG] from='client.24227 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:46:09.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:08 vm02 bash[22526]: audit 2026-03-10T05:46:08.038897+0000 mon.a (mon.0) 458 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T05:46:09.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:08 vm02 bash[22526]: audit 2026-03-10T05:46:08.040180+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T05:46:09.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:08 vm02 bash[22526]: audit 2026-03-10T05:46:08.040552+0000 mon.a (mon.0) 460 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:46:09.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:08 vm05 bash[17864]: audit 2026-03-10T05:46:07.748965+0000 mon.a (mon.0) 454 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished
2026-03-10T05:46:09.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:08 vm05 bash[17864]: cluster 2026-03-10T05:46:07.749009+0000 mon.a (mon.0) 455 : cluster [DBG] osdmap e33: 6 total, 5 up, 6 in
2026-03-10T05:46:09.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:08 vm05 bash[17864]: audit 2026-03-10T05:46:07.749683+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T05:46:09.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:08 vm05 bash[17864]: audit 2026-03-10T05:46:07.764393+0000 mon.a (mon.0) 457 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T05:46:09.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:08 vm05 bash[17864]: audit 2026-03-10T05:46:08.037636+0000 mgr.y (mgr.14152) 104 : audit [DBG] from='client.24227 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:46:09.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:08 vm05 bash[17864]: audit 2026-03-10T05:46:08.038897+0000 mon.a (mon.0) 458 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T05:46:09.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:08 vm05 bash[17864]: audit 2026-03-10T05:46:08.040180+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T05:46:09.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:08 vm05 bash[17864]: audit 2026-03-10T05:46:08.040552+0000 mon.a (mon.0) 460 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:46:10.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:09 vm05 bash[17864]: cluster 2026-03-10T05:46:07.589210+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T05:46:10.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:09 vm05 bash[17864]: cluster 2026-03-10T05:46:07.589293+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T05:46:10.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:09 vm05 bash[17864]: cluster 2026-03-10T05:46:08.754714+0000 mon.a (mon.0) 461 : cluster [INF] osd.5 [v2:192.168.123.105:6808/3303341454,v1:192.168.123.105:6809/3303341454] boot
2026-03-10T05:46:10.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:09 vm05 bash[17864]: cluster 2026-03-10T05:46:08.754761+0000 mon.a (mon.0) 462 : cluster [DBG] osdmap e34: 6 total, 6 up, 6 in
2026-03-10T05:46:10.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:09 vm05 bash[17864]: audit 2026-03-10T05:46:08.755566+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T05:46:10.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:09 vm05 bash[17864]: cluster 2026-03-10T05:46:08.763327+0000 mgr.y (mgr.14152) 105 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail
2026-03-10T05:46:10.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:09 vm05 bash[17864]: cluster 2026-03-10T05:46:09.757945+0000 mon.a (mon.0) 464 : cluster [DBG] osdmap e35: 6 total, 6 up, 6 in
2026-03-10T05:46:10.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:09 vm02 bash[17462]: cluster 2026-03-10T05:46:07.589210+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T05:46:10.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:09 vm02 bash[17462]: cluster 2026-03-10T05:46:07.589293+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T05:46:10.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:09 vm02 bash[17462]: cluster 2026-03-10T05:46:08.754714+0000 mon.a (mon.0) 461 : cluster [INF] osd.5 [v2:192.168.123.105:6808/3303341454,v1:192.168.123.105:6809/3303341454] boot
2026-03-10T05:46:10.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:09 vm02 bash[17462]: cluster 2026-03-10T05:46:08.754761+0000 mon.a (mon.0) 462 : cluster [DBG] osdmap e34: 6 total, 6 up, 6 in
2026-03-10T05:46:10.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:09 vm02 bash[17462]: audit 2026-03-10T05:46:08.755566+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T05:46:10.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:09 vm02 bash[17462]: cluster 2026-03-10T05:46:08.763327+0000 mgr.y (mgr.14152) 105 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail
2026-03-10T05:46:10.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:09 vm02 bash[17462]: cluster 2026-03-10T05:46:09.757945+0000 mon.a (mon.0) 464 : cluster [DBG] osdmap e35: 6 total, 6 up, 6 in
2026-03-10T05:46:10.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:09 vm02 bash[22526]: cluster 2026-03-10T05:46:07.589210+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T05:46:10.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:09 vm02 bash[22526]: cluster 2026-03-10T05:46:07.589293+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T05:46:10.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:09 vm02 bash[22526]: cluster 2026-03-10T05:46:08.754714+0000 mon.a (mon.0) 461 : cluster [INF] osd.5 [v2:192.168.123.105:6808/3303341454,v1:192.168.123.105:6809/3303341454] boot
2026-03-10T05:46:10.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:09 vm02 bash[22526]: cluster 2026-03-10T05:46:08.754761+0000 mon.a (mon.0) 462 : cluster [DBG] osdmap e34: 6 total, 6 up, 6 in
2026-03-10T05:46:10.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:09 vm02 bash[22526]: audit 2026-03-10T05:46:08.755566+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T05:46:10.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:09 vm02 bash[22526]: cluster 2026-03-10T05:46:08.763327+0000 mgr.y (mgr.14152) 105 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail
2026-03-10T05:46:10.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:09 vm02 bash[22526]: cluster 2026-03-10T05:46:09.757945+0000 mon.a (mon.0) 464 : cluster [DBG] osdmap e35: 6 total, 6 up, 6 in
2026-03-10T05:46:12.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:11 vm05 bash[17864]: cluster 2026-03-10T05:46:10.763591+0000 mgr.y (mgr.14152) 106 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 34 MiB used, 120 GiB / 120 GiB avail
2026-03-10T05:46:12.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:11 vm05 bash[17864]: cluster 2026-03-10T05:46:10.766215+0000 mon.a (mon.0) 465 : cluster [DBG] osdmap e36: 6 total, 6 up, 6 in
2026-03-10T05:46:12.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:11 vm05 bash[17864]: cephadm 2026-03-10T05:46:11.222150+0000 mgr.y (mgr.14152) 107 : cephadm [INF] Detected new or changed devices on vm05
2026-03-10T05:46:12.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:11 vm05 bash[17864]: audit 2026-03-10T05:46:11.227615+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:12.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:11 vm05 bash[17864]: audit 2026-03-10T05:46:11.229610+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:46:12.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:11 vm05 bash[17864]: audit 2026-03-10T05:46:11.230061+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:46:12.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:11 vm05 bash[17864]: cephadm 2026-03-10T05:46:11.230341+0000 mgr.y (mgr.14152) 108 : cephadm [INF] Adjusting osd_memory_target on vm05 to 227.8M
2026-03-10T05:46:12.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:11 vm05 bash[17864]: cephadm 2026-03-10T05:46:11.230690+0000 mgr.y (mgr.14152) 109 : cephadm [WRN] Unable to set osd_memory_target on vm05 to 238957977: error parsing value: Value '238957977' is below minimum 939524096
2026-03-10T05:46:12.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:11 vm05 bash[17864]: audit 2026-03-10T05:46:11.233865+0000 mon.a (mon.0) 469 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:12.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:11 vm02 bash[17462]: cluster 2026-03-10T05:46:10.763591+0000 mgr.y (mgr.14152) 106 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 34 MiB used, 120 GiB / 120 GiB avail
2026-03-10T05:46:12.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:11 vm02 bash[17462]: cluster 2026-03-10T05:46:10.766215+0000 mon.a (mon.0) 465 : cluster [DBG] osdmap e36: 6 total, 6 up, 6 in
2026-03-10T05:46:12.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:11 vm02 bash[17462]: cephadm 2026-03-10T05:46:11.222150+0000 mgr.y (mgr.14152) 107 : cephadm [INF] Detected new or changed devices on vm05
2026-03-10T05:46:12.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:11 vm02 bash[17462]: audit 2026-03-10T05:46:11.227615+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:12.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:11 vm02 bash[17462]: audit 2026-03-10T05:46:11.229610+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:46:12.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:11 vm02 bash[17462]: audit 2026-03-10T05:46:11.230061+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:46:12.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:11 vm02 bash[17462]: cephadm 2026-03-10T05:46:11.230341+0000 mgr.y (mgr.14152) 108 : cephadm [INF] Adjusting osd_memory_target on vm05 to 227.8M
2026-03-10T05:46:12.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:11 vm02 bash[17462]: cephadm 2026-03-10T05:46:11.230690+0000 mgr.y (mgr.14152) 109 : cephadm [WRN] Unable to set osd_memory_target on vm05 to 238957977: error parsing value: Value '238957977' is below minimum 939524096
2026-03-10T05:46:12.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:11 vm02 bash[17462]: audit 2026-03-10T05:46:11.233865+0000 mon.a (mon.0) 469 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:12.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:11 vm02 bash[22526]: cluster 2026-03-10T05:46:10.763591+0000 mgr.y (mgr.14152) 106 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 34 MiB used, 120 GiB / 120 GiB avail
2026-03-10T05:46:12.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:11 vm02 bash[22526]: cluster 2026-03-10T05:46:10.766215+0000 mon.a (mon.0) 465 : cluster [DBG] osdmap e36: 6 total, 6 up, 6 in
2026-03-10T05:46:12.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:11 vm02 bash[22526]: cephadm 2026-03-10T05:46:11.222150+0000 mgr.y (mgr.14152) 107 : cephadm [INF] Detected new or changed devices on vm05
2026-03-10T05:46:12.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:11 vm02 bash[22526]: audit 2026-03-10T05:46:11.227615+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:12.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:11 vm02 bash[22526]: audit 2026-03-10T05:46:11.229610+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:46:12.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:11 vm02 bash[22526]: audit 2026-03-10T05:46:11.230061+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:46:12.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:11 vm02 bash[22526]: cephadm 2026-03-10T05:46:11.230341+0000 mgr.y (mgr.14152) 108 : cephadm [INF] Adjusting osd_memory_target on vm05 to 227.8M
2026-03-10T05:46:12.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:11 vm02 bash[22526]: cephadm 2026-03-10T05:46:11.230690+0000 mgr.y (mgr.14152) 109 : cephadm [WRN] Unable to set osd_memory_target on vm05 to 238957977: error parsing value: Value '238957977' is below minimum 939524096
2026-03-10T05:46:12.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:11 vm02 bash[22526]: audit 2026-03-10T05:46:11.233865+0000 mon.a (mon.0) 469 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:13.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:12 vm05 bash[17864]: audit 2026-03-10T05:46:12.161021+0000 mon.c (mon.1) 18 : audit [INF] from='client.? 192.168.123.105:0/1876349663' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b2fa96ba-d56a-43b9-ab42-f9fc8abe2daf"}]: dispatch
2026-03-10T05:46:13.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:12 vm05 bash[17864]: audit 2026-03-10T05:46:12.161426+0000 mon.a (mon.0) 470 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b2fa96ba-d56a-43b9-ab42-f9fc8abe2daf"}]: dispatch
2026-03-10T05:46:13.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:12 vm05 bash[17864]: audit 2026-03-10T05:46:12.167750+0000 mon.a (mon.0) 471 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b2fa96ba-d56a-43b9-ab42-f9fc8abe2daf"}]': finished
2026-03-10T05:46:13.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:12 vm05 bash[17864]: cluster 2026-03-10T05:46:12.167880+0000 mon.a (mon.0) 472 : cluster [DBG] osdmap e37: 7 total, 6 up, 7 in
2026-03-10T05:46:13.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:12 vm05 bash[17864]: audit 2026-03-10T05:46:12.168032+0000 mon.a (mon.0) 473 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T05:46:13.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:12 vm05 bash[17864]: audit 2026-03-10T05:46:12.717564+0000 mon.b (mon.2) 12 : audit [DBG] from='client.? 192.168.123.105:0/1335567536' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T05:46:13.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:12 vm02 bash[17462]: audit 2026-03-10T05:46:12.161021+0000 mon.c (mon.1) 18 : audit [INF] from='client.? 192.168.123.105:0/1876349663' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b2fa96ba-d56a-43b9-ab42-f9fc8abe2daf"}]: dispatch
2026-03-10T05:46:13.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:12 vm02 bash[17462]: audit 2026-03-10T05:46:12.161426+0000 mon.a (mon.0) 470 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b2fa96ba-d56a-43b9-ab42-f9fc8abe2daf"}]: dispatch
2026-03-10T05:46:13.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:12 vm02 bash[17462]: audit 2026-03-10T05:46:12.167750+0000 mon.a (mon.0) 471 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b2fa96ba-d56a-43b9-ab42-f9fc8abe2daf"}]': finished
2026-03-10T05:46:13.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:12 vm02 bash[17462]: cluster 2026-03-10T05:46:12.167880+0000 mon.a (mon.0) 472 : cluster [DBG] osdmap e37: 7 total, 6 up, 7 in
2026-03-10T05:46:13.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:12 vm02 bash[17462]: audit 2026-03-10T05:46:12.168032+0000 mon.a (mon.0) 473 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T05:46:13.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:12 vm02 bash[17462]: audit 2026-03-10T05:46:12.717564+0000 mon.b (mon.2) 12 : audit [DBG] from='client.? 192.168.123.105:0/1335567536' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T05:46:13.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:12 vm02 bash[22526]: audit 2026-03-10T05:46:12.161021+0000 mon.c (mon.1) 18 : audit [INF] from='client.? 192.168.123.105:0/1876349663' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b2fa96ba-d56a-43b9-ab42-f9fc8abe2daf"}]: dispatch
2026-03-10T05:46:13.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:12 vm02 bash[22526]: audit 2026-03-10T05:46:12.161426+0000 mon.a (mon.0) 470 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b2fa96ba-d56a-43b9-ab42-f9fc8abe2daf"}]: dispatch
2026-03-10T05:46:13.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:12 vm02 bash[22526]: audit 2026-03-10T05:46:12.167750+0000 mon.a (mon.0) 471 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b2fa96ba-d56a-43b9-ab42-f9fc8abe2daf"}]': finished
2026-03-10T05:46:13.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:12 vm02 bash[22526]: cluster 2026-03-10T05:46:12.167880+0000 mon.a (mon.0) 472 : cluster [DBG] osdmap e37: 7 total, 6 up, 7 in
2026-03-10T05:46:13.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:12 vm02 bash[22526]: audit 2026-03-10T05:46:12.168032+0000 mon.a (mon.0) 473 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T05:46:13.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:12 vm02 bash[22526]: audit 2026-03-10T05:46:12.717564+0000 mon.b (mon.2) 12 : audit [DBG] from='client.? 192.168.123.105:0/1335567536' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T05:46:14.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:13 vm02 bash[17462]: cluster 2026-03-10T05:46:12.763829+0000 mgr.y (mgr.14152) 110 : cluster [DBG] pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 34 MiB used, 120 GiB / 120 GiB avail
2026-03-10T05:46:14.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:13 vm02 bash[22526]: cluster 2026-03-10T05:46:12.763829+0000 mgr.y (mgr.14152) 110 : cluster [DBG] pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 34 MiB used, 120 GiB / 120 GiB avail
2026-03-10T05:46:14.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:13 vm05 bash[17864]: cluster 2026-03-10T05:46:12.763829+0000 mgr.y (mgr.14152) 110 : cluster [DBG] pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 34 MiB used, 120 GiB / 120 GiB avail
2026-03-10T05:46:15.052 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:14 vm05 bash[17864]: audit 2026-03-10T05:46:14.039977+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T05:46:15.052 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:14 vm05 bash[17864]: audit 2026-03-10T05:46:14.040707+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T05:46:15.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:14 vm02 bash[17462]: audit 2026-03-10T05:46:14.039977+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T05:46:15.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:14 vm02 bash[17462]: audit 2026-03-10T05:46:14.040707+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T05:46:15.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:14 vm02 bash[22526]: audit 2026-03-10T05:46:14.039977+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T05:46:15.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:14 vm02 bash[22526]: audit 2026-03-10T05:46:14.040707+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T05:46:16.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:15 vm02 bash[17462]: cluster 2026-03-10T05:46:14.764095+0000 mgr.y (mgr.14152) 111 : cluster [DBG] pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail
2026-03-10T05:46:16.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:15 vm02 bash[22526]: cluster 2026-03-10T05:46:14.764095+0000 mgr.y (mgr.14152) 111 : cluster [DBG] pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail
2026-03-10T05:46:16.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:15 vm05 bash[17864]: cluster 2026-03-10T05:46:14.764095+0000 mgr.y (mgr.14152) 111 : cluster [DBG] pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail
2026-03-10T05:46:18.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:17 vm05 bash[17864]: cluster 2026-03-10T05:46:16.764371+0000 mgr.y (mgr.14152) 112 : cluster [DBG] pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail
2026-03-10T05:46:18.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:17 vm02 bash[17462]: cluster 2026-03-10T05:46:16.764371+0000 mgr.y (mgr.14152) 112 : cluster [DBG] pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail
2026-03-10T05:46:18.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:17 vm02 bash[22526]: cluster 2026-03-10T05:46:16.764371+0000 mgr.y (mgr.14152) 112 : cluster [DBG] pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail
2026-03-10T05:46:19.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:18 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:19.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:18 vm05 bash[17864]: audit 2026-03-10T05:46:18.187603+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T05:46:19.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:18 vm05 bash[17864]: audit 2026-03-10T05:46:18.188103+0000 mon.a (mon.0) 477 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:46:19.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:18 vm05 bash[17864]: cephadm 2026-03-10T05:46:18.188485+0000 mgr.y (mgr.14152) 113 : cephadm [INF] Deploying daemon osd.6 on vm05
2026-03-10T05:46:19.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:18 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:19.008 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:18 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:19.008 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:18 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:19.008 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:46:18 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:19.008 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:46:18 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:19.008 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:46:18 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:19.008 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:46:18 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:19.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:18 vm02 bash[17462]: audit 2026-03-10T05:46:18.187603+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T05:46:19.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:18 vm02 bash[17462]: audit 2026-03-10T05:46:18.188103+0000 mon.a (mon.0) 477 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:46:19.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:18 vm02 bash[17462]: cephadm 2026-03-10T05:46:18.188485+0000 mgr.y (mgr.14152) 113 : cephadm [INF] Deploying daemon osd.6 on vm05
2026-03-10T05:46:19.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:18 vm02 bash[22526]: audit 2026-03-10T05:46:18.187603+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T05:46:19.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:18 vm02 bash[22526]: audit 2026-03-10T05:46:18.188103+0000 mon.a (mon.0) 477 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:46:19.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:18 vm02 bash[22526]: cephadm 2026-03-10T05:46:18.188485+0000 mgr.y (mgr.14152) 113 : cephadm [INF] Deploying daemon osd.6 on vm05
2026-03-10T05:46:20.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:20 vm02 bash[17462]: cluster 2026-03-10T05:46:18.764626+0000 mgr.y (mgr.14152) 114 : cluster [DBG] pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail
2026-03-10T05:46:20.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:20 vm02 bash[17462]: audit 2026-03-10T05:46:19.042616+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:20.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:20 vm02 bash[17462]: audit 2026-03-10T05:46:19.059890+0000 mon.a (mon.0) 479 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:46:20.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:20 vm02 bash[17462]: audit 2026-03-10T05:46:19.060607+0000 mon.a (mon.0) 480 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:46:20.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:20 vm02 bash[17462]: audit 2026-03-10T05:46:19.060971+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:46:20.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:20 vm02 bash[22526]: cluster 2026-03-10T05:46:18.764626+0000 mgr.y (mgr.14152) 114 : cluster [DBG] pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail
2026-03-10T05:46:20.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:20 vm02 bash[22526]: audit 2026-03-10T05:46:19.042616+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:20.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:20 vm02 bash[22526]: audit 2026-03-10T05:46:19.059890+0000 mon.a (mon.0) 479 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:46:20.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:20 vm02 bash[22526]: audit 2026-03-10T05:46:19.060607+0000 mon.a (mon.0) 480 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:46:20.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:20 vm02 bash[22526]: audit 2026-03-10T05:46:19.060971+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:46:20.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:20 vm05 bash[17864]: cluster 2026-03-10T05:46:18.764626+0000 mgr.y (mgr.14152) 114 : cluster [DBG] pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail
2026-03-10T05:46:20.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:20 vm05 bash[17864]: audit 2026-03-10T05:46:19.042616+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:20.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:20 vm05 bash[17864]: audit 2026-03-10T05:46:19.059890+0000 mon.a (mon.0) 479 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:46:20.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:20 vm05 bash[17864]: audit 2026-03-10T05:46:19.060607+0000 mon.a (mon.0) 480 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:46:20.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:20 vm05 bash[17864]: audit 2026-03-10T05:46:19.060971+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:46:22.315 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:22 vm05 bash[17864]: cluster 2026-03-10T05:46:20.764890+0000 mgr.y (mgr.14152) 115 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail
2026-03-10T05:46:22.316 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:22 vm05 bash[17864]: audit 2026-03-10T05:46:21.964344+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:22.316 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:22 vm05 bash[17864]: audit 2026-03-10T05:46:21.968481+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:22.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:22 vm02 bash[17462]: cluster 2026-03-10T05:46:20.764890+0000 mgr.y (mgr.14152) 115 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail
2026-03-10T05:46:22.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:22 vm02 bash[17462]: audit 2026-03-10T05:46:21.964344+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:22.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:22 vm02 bash[17462]: audit 2026-03-10T05:46:21.968481+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:22.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:22 vm02 bash[22526]: cluster 2026-03-10T05:46:20.764890+0000 mgr.y (mgr.14152) 115 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail
2026-03-10T05:46:22.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:22 vm02 bash[22526]: audit 2026-03-10T05:46:21.964344+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:22.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:22 vm02 bash[22526]: audit 2026-03-10T05:46:21.968481+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:22.373 INFO:teuthology.orchestra.run.vm05.stdout:Created osd(s) 6 on host 'vm05'
2026-03-10T05:46:22.445 DEBUG:teuthology.orchestra.run.vm05:osd.6> sudo journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.6.service
2026-03-10T05:46:22.446 INFO:tasks.cephadm:Deploying osd.7 on vm05 with /dev/vdb...
2026-03-10T05:46:22.446 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- lvm zap /dev/vdb
2026-03-10T05:46:23.058 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T05:46:23.071 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph orch daemon add osd vm05:/dev/vdb
2026-03-10T05:46:23.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:23 vm05 bash[17864]: audit 2026-03-10T05:46:22.162705+0000 mon.b (mon.2) 13 : audit [INF] from='osd.6 [v2:192.168.123.105:6816/566773014,v1:192.168.123.105:6817/566773014]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-10T05:46:23.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:23 vm05 bash[17864]: audit 2026-03-10T05:46:22.168788+0000 mon.a (mon.0) 484 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-10T05:46:23.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:23 vm05 bash[17864]: audit 2026-03-10T05:46:22.370420+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:23.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:23 vm05 bash[17864]: audit 2026-03-10T05:46:22.385614+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:46:23.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:23 vm05 bash[17864]: audit 2026-03-10T05:46:22.386514+0000 mon.a (mon.0) 487 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:46:23.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:23 vm05 bash[17864]: audit 2026-03-10T05:46:22.386947+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:46:23.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:23 vm02 bash[17462]: audit 2026-03-10T05:46:22.162705+0000 mon.b (mon.2) 13 : audit [INF] from='osd.6 [v2:192.168.123.105:6816/566773014,v1:192.168.123.105:6817/566773014]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-10T05:46:23.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:23 vm02 bash[17462]: audit 2026-03-10T05:46:22.168788+0000 mon.a (mon.0) 484 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-10T05:46:23.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:23 vm02 bash[17462]: audit 2026-03-10T05:46:22.370420+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:23.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:23 vm02 bash[17462]: audit 2026-03-10T05:46:22.385614+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:46:23.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:23 vm02 bash[17462]: audit 2026-03-10T05:46:22.386514+0000 mon.a (mon.0) 487 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:46:23.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:23 vm02 bash[17462]: audit 2026-03-10T05:46:22.386947+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:46:23.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:23 vm02 bash[22526]: audit 2026-03-10T05:46:22.162705+0000 mon.b (mon.2) 13 : audit [INF] from='osd.6 [v2:192.168.123.105:6816/566773014,v1:192.168.123.105:6817/566773014]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-10T05:46:23.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:23 vm02 bash[22526]: audit 2026-03-10T05:46:22.168788+0000 mon.a (mon.0) 484 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-10T05:46:23.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:23 vm02 bash[22526]: audit 2026-03-10T05:46:22.370420+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:23.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:23 vm02 bash[22526]: audit 2026-03-10T05:46:22.385614+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:46:23.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:23 vm02 bash[22526]: audit 2026-03-10T05:46:22.386514+0000 mon.a (mon.0) 487 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:46:23.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:23 vm02 bash[22526]: audit 2026-03-10T05:46:22.386947+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:46:24.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:24 vm02 bash[17462]: cluster 2026-03-10T05:46:22.765132+0000 mgr.y (mgr.14152) 116 : cluster [DBG] pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail
2026-03-10T05:46:24.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:24 vm02 bash[17462]: audit 2026-03-10T05:46:23.055901+0000 mon.a (mon.0) 489 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished
2026-03-10T05:46:24.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:24 vm02 bash[17462]: cluster 2026-03-10T05:46:23.055992+0000 mon.a (mon.0) 490 : cluster [DBG] osdmap e38: 7 total, 6 up, 7 in
2026-03-10T05:46:24.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:24 vm02 bash[17462]: audit 2026-03-10T05:46:23.056980+0000 mon.a (mon.0) 491 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T05:46:24.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:24 vm02 bash[17462]: audit 2026-03-10T05:46:23.062346+0000 mon.b (mon.2) 14 : audit [INF] from='osd.6 [v2:192.168.123.105:6816/566773014,v1:192.168.123.105:6817/566773014]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T05:46:24.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:24 vm02 bash[17462]: audit 2026-03-10T05:46:23.069556+0000 mon.a (mon.0) 492 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T05:46:24.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:24 vm02 bash[17462]: audit 2026-03-10T05:46:23.458380+0000 mon.a (mon.0) 493 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T05:46:24.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:24 vm02 bash[17462]: audit 2026-03-10T05:46:23.460107+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T05:46:24.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:24 vm02 bash[17462]: audit 2026-03-10T05:46:23.460753+0000 mon.a (mon.0) 495 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:46:24.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:24 vm02 bash[22526]: cluster 2026-03-10T05:46:22.765132+0000 mgr.y (mgr.14152) 116 : cluster [DBG] pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail
2026-03-10T05:46:24.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:24 vm02 bash[22526]: audit 2026-03-10T05:46:23.055901+0000 mon.a (mon.0) 489 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished
2026-03-10T05:46:24.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:24 vm02 bash[22526]: cluster 2026-03-10T05:46:23.055992+0000 mon.a (mon.0) 490 : cluster [DBG] osdmap e38: 7 total, 6 up, 
"weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T05:46:24.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:24 vm05 bash[17864]: audit 2026-03-10T05:46:23.458380+0000 mon.a (mon.0) 493 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T05:46:24.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:24 vm05 bash[17864]: audit 2026-03-10T05:46:23.460107+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T05:46:24.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:24 vm05 bash[17864]: audit 2026-03-10T05:46:23.460753+0000 mon.a (mon.0) 495 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:46:24.508 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:46:24 vm05 bash[27098]: debug 2026-03-10T05:46:24.078+0000 7fed9d331700 -1 osd.6 0 waiting for initial osdmap 2026-03-10T05:46:24.508 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:46:24 vm05 bash[27098]: debug 2026-03-10T05:46:24.082+0000 7fed97cc8700 -1 osd.6 39 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T05:46:25.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:25 vm02 bash[17462]: audit 2026-03-10T05:46:23.456997+0000 mgr.y (mgr.14152) 117 : audit [DBG] from='client.24254 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:46:25.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:25 vm02 bash[17462]: audit 2026-03-10T05:46:24.067630+0000 mon.a (mon.0) 496 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T05:46:25.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:25 vm02 bash[17462]: cluster 2026-03-10T05:46:24.068103+0000 mon.a (mon.0) 497 : cluster [DBG] osdmap e39: 7 total, 6 up, 7 in 2026-03-10T05:46:25.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:25 vm02 bash[17462]: audit 2026-03-10T05:46:24.069885+0000 mon.a (mon.0) 498 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T05:46:25.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:25 vm02 bash[17462]: audit 2026-03-10T05:46:24.084470+0000 mon.a (mon.0) 499 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T05:46:25.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:25 vm02 bash[22526]: audit 2026-03-10T05:46:23.456997+0000 mgr.y (mgr.14152) 117 : audit [DBG] from='client.24254 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:46:25.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:25 vm02 bash[22526]: audit 2026-03-10T05:46:24.067630+0000 mon.a (mon.0) 496 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T05:46:25.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:25 vm02 bash[22526]: cluster 
2026-03-10T05:46:24.068103+0000 mon.a (mon.0) 497 : cluster [DBG] osdmap e39: 7 total, 6 up, 7 in 2026-03-10T05:46:25.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:25 vm02 bash[22526]: audit 2026-03-10T05:46:24.069885+0000 mon.a (mon.0) 498 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T05:46:25.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:25 vm02 bash[22526]: audit 2026-03-10T05:46:24.084470+0000 mon.a (mon.0) 499 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T05:46:25.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:25 vm05 bash[17864]: audit 2026-03-10T05:46:23.456997+0000 mgr.y (mgr.14152) 117 : audit [DBG] from='client.24254 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:46:25.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:25 vm05 bash[17864]: audit 2026-03-10T05:46:24.067630+0000 mon.a (mon.0) 496 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T05:46:25.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:25 vm05 bash[17864]: cluster 2026-03-10T05:46:24.068103+0000 mon.a (mon.0) 497 : cluster [DBG] osdmap e39: 7 total, 6 up, 7 in 2026-03-10T05:46:25.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:25 vm05 bash[17864]: audit 2026-03-10T05:46:24.069885+0000 mon.a (mon.0) 498 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T05:46:25.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:25 vm05 bash[17864]: audit 2026-03-10T05:46:24.084470+0000 mon.a (mon.0) 499 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T05:46:26.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:26 vm02 bash[17462]: cluster 2026-03-10T05:46:23.196775+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T05:46:26.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:26 vm02 bash[17462]: cluster 2026-03-10T05:46:23.196860+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T05:46:26.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:26 vm02 bash[17462]: cluster 2026-03-10T05:46:24.765393+0000 mgr.y (mgr.14152) 118 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-10T05:46:26.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:26 vm02 bash[17462]: audit 2026-03-10T05:46:25.072405+0000 mon.a (mon.0) 500 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T05:46:26.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:26 vm02 bash[17462]: cluster 2026-03-10T05:46:25.082900+0000 mon.a (mon.0) 501 : cluster [INF] osd.6 [v2:192.168.123.105:6816/566773014,v1:192.168.123.105:6817/566773014] boot 2026-03-10T05:46:26.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:26 vm02 bash[17462]: cluster 2026-03-10T05:46:25.082924+0000 mon.a (mon.0) 502 : cluster [DBG] osdmap e40: 7 total, 7 up, 7 in 2026-03-10T05:46:26.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:26 vm02 bash[17462]: audit 
2026-03-10T05:46:25.083203+0000 mon.a (mon.0) 503 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T05:46:26.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:26 vm02 bash[22526]: cluster 2026-03-10T05:46:23.196775+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T05:46:26.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:26 vm02 bash[22526]: cluster 2026-03-10T05:46:23.196860+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T05:46:26.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:26 vm02 bash[22526]: cluster 2026-03-10T05:46:24.765393+0000 mgr.y (mgr.14152) 118 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-10T05:46:26.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:26 vm02 bash[22526]: audit 2026-03-10T05:46:25.072405+0000 mon.a (mon.0) 500 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T05:46:26.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:26 vm02 bash[22526]: cluster 2026-03-10T05:46:25.082900+0000 mon.a (mon.0) 501 : cluster [INF] osd.6 [v2:192.168.123.105:6816/566773014,v1:192.168.123.105:6817/566773014] boot 2026-03-10T05:46:26.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:26 vm02 bash[22526]: cluster 2026-03-10T05:46:25.082924+0000 mon.a (mon.0) 502 : cluster [DBG] osdmap e40: 7 total, 7 up, 7 in 2026-03-10T05:46:26.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:26 vm02 bash[22526]: audit 2026-03-10T05:46:25.083203+0000 mon.a (mon.0) 503 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T05:46:26.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:26 vm05 bash[17864]: cluster 2026-03-10T05:46:23.196775+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T05:46:26.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:26 vm05 bash[17864]: cluster 2026-03-10T05:46:23.196860+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T05:46:26.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:26 vm05 bash[17864]: cluster 2026-03-10T05:46:24.765393+0000 mgr.y (mgr.14152) 118 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-10T05:46:26.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:26 vm05 bash[17864]: audit 2026-03-10T05:46:25.072405+0000 mon.a (mon.0) 500 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T05:46:26.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:26 vm05 bash[17864]: cluster 2026-03-10T05:46:25.082900+0000 mon.a (mon.0) 501 : cluster [INF] osd.6 [v2:192.168.123.105:6816/566773014,v1:192.168.123.105:6817/566773014] boot 2026-03-10T05:46:26.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:26 vm05 bash[17864]: cluster 2026-03-10T05:46:25.082924+0000 mon.a (mon.0) 502 : cluster [DBG] osdmap e40: 7 total, 7 up, 7 in 2026-03-10T05:46:26.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:26 vm05 bash[17864]: audit 2026-03-10T05:46:25.083203+0000 mon.a (mon.0) 503 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T05:46:27.508 
2026-03-10T05:46:27.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:27 vm05 bash[17864]: cluster 2026-03-10T05:46:26.095195+0000 mon.a (mon.0) 504 : cluster [DBG] osdmap e41: 7 total, 7 up, 7 in
2026-03-10T05:46:27.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:27 vm05 bash[17864]: audit 2026-03-10T05:46:26.639714+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:27.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:27 vm05 bash[17864]: audit 2026-03-10T05:46:26.640288+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:46:27.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:27 vm05 bash[17864]: audit 2026-03-10T05:46:26.640712+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:46:27.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:27 vm05 bash[17864]: audit 2026-03-10T05:46:26.641052+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:46:27.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:27 vm05 bash[17864]: audit 2026-03-10T05:46:26.644778+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:28.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:28 vm05 bash[17864]: cephadm 2026-03-10T05:46:26.633387+0000 mgr.y (mgr.14152) 119 : cephadm [INF] Detected new or changed devices on vm05
2026-03-10T05:46:28.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:28 vm05 bash[17864]: cephadm 2026-03-10T05:46:26.641322+0000 mgr.y (mgr.14152) 120 : cephadm [INF] Adjusting osd_memory_target on vm05 to 151.9M
2026-03-10T05:46:28.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:28 vm05 bash[17864]: cephadm 2026-03-10T05:46:26.641669+0000 mgr.y (mgr.14152) 121 : cephadm [WRN] Unable to set osd_memory_target on vm05 to 159305318: error parsing value: Value '159305318' is below minimum 939524096
2026-03-10T05:46:28.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:28 vm05 bash[17864]: cluster 2026-03-10T05:46:26.765646+0000 mgr.y (mgr.14152) 122 : cluster [DBG] pgmap v97: 1 pgs: 1 remapped+peering; 449 KiB data, 40 MiB used, 140 GiB / 140 GiB avail
2026-03-10T05:46:28.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:28 vm05 bash[17864]: cluster 2026-03-10T05:46:27.111226+0000 mon.a (mon.0) 510 : cluster [DBG] osdmap e42: 7 total, 7 up, 7 in
2026-03-10T05:46:28.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:28 vm05 bash[17864]: audit 2026-03-10T05:46:27.554182+0000 mon.b (mon.2) 15 : audit [INF] from='client.? 192.168.123.105:0/2073374132' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2d1f3ab7-28e5-424b-a95a-4d9947f78095"}]: dispatch
2026-03-10T05:46:28.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:28 vm05 bash[17864]: audit 2026-03-10T05:46:27.559755+0000 mon.a (mon.0) 511 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2d1f3ab7-28e5-424b-a95a-4d9947f78095"}]: dispatch
2026-03-10T05:46:28.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:28 vm05 bash[17864]: audit 2026-03-10T05:46:27.566179+0000 mon.a (mon.0) 512 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2d1f3ab7-28e5-424b-a95a-4d9947f78095"}]': finished
2026-03-10T05:46:28.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:28 vm05 bash[17864]: cluster 2026-03-10T05:46:27.566298+0000 mon.a (mon.0) 513 : cluster [DBG] osdmap e43: 8 total, 7 up, 8 in
2026-03-10T05:46:28.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:28 vm05 bash[17864]: audit 2026-03-10T05:46:27.566487+0000 mon.a (mon.0) 514 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T05:46:29.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:29 vm05 bash[17864]: audit 2026-03-10T05:46:28.209851+0000 mon.a (mon.0) 515 : audit [DBG] from='client.? 192.168.123.105:0/2126503580' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T05:46:30.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:30 vm05 bash[17864]: cluster 2026-03-10T05:46:28.765880+0000 mgr.y (mgr.14152) 123 : cluster [DBG] pgmap v100: 1 pgs: 1 remapped+peering; 449 KiB data, 40 MiB used, 140 GiB / 140 GiB avail
2026-03-10T05:46:31.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:31 vm05 bash[17864]: cluster 2026-03-10T05:46:31.121655+0000 mon.a (mon.0) 516 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)
2026-03-10T05:46:32.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:32 vm05 bash[17864]: cluster 2026-03-10T05:46:30.766179+0000 mgr.y (mgr.14152) 124 : cluster [DBG] pgmap v101: 1 pgs: 1 remapped+peering; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail
2026-03-10T05:46:33.975 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:33 vm05 bash[17864]: cluster 2026-03-10T05:46:32.766450+0000 mgr.y (mgr.14152) 125 : cluster [DBG] pgmap v102: 1 pgs: 1 active+recovering; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail; 10 KiB/s, 0 objects/s recovering
2026-03-10T05:46:34.556 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:34 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:34.556 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:34 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:34.556 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:46:34 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:34.556 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:46:34 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:34.556 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:46:34 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:34 vm05 bash[17864]: cluster 2026-03-10T05:46:33.701360+0000 mon.a (mon.0) 517 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering)
2026-03-10T05:46:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:34 vm05 bash[17864]: cluster 2026-03-10T05:46:33.701393+0000 mon.a (mon.0) 518 : cluster [INF] Cluster is now healthy
2026-03-10T05:46:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:34 vm05 bash[17864]: audit 2026-03-10T05:46:33.826888+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-10T05:46:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:34 vm05 bash[17864]: audit 2026-03-10T05:46:33.827514+0000 mon.a (mon.0) 520 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:46:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:34 vm05 bash[17864]: cephadm 2026-03-10T05:46:33.828024+0000 mgr.y (mgr.14152) 126 : cephadm [INF] Deploying daemon osd.7 on vm05
2026-03-10T05:46:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:34 vm05 bash[17864]: audit 2026-03-10T05:46:34.657296+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:34 vm05 bash[17864]: audit 2026-03-10T05:46:34.670921+0000 mon.a (mon.0) 522 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:46:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:34 vm05 bash[17864]: audit 2026-03-10T05:46:34.671992+0000 mon.a (mon.0) 523 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:46:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:34 vm05 bash[17864]: audit 2026-03-10T05:46:34.672638+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:46:36.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:35 vm05 bash[17864]: cluster 2026-03-10T05:46:34.766728+0000 mgr.y (mgr.14152) 127 : cluster [DBG] pgmap v103: 1 pgs: 1 active+recovering; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail; 8.6 KiB/s, 0 objects/s recovering
2026-03-10T05:46:37.934 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:37 vm05 bash[17864]: cluster 2026-03-10T05:46:36.767010+0000 mgr.y (mgr.14152) 128 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail; 40 KiB/s, 0 objects/s recovering
2026-03-10T05:46:37.934 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:37 vm05 bash[17864]: audit 2026-03-10T05:46:37.601933+0000 mon.b (mon.2) 16 : audit [INF] from='osd.7 [v2:192.168.123.105:6824/3413503051,v1:192.168.123.105:6825/3413503051]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-10T05:46:37.934 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:37 vm05 bash[17864]: audit 2026-03-10T05:46:37.604312+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:37.934 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:37 vm05 bash[17864]: audit 2026-03-10T05:46:37.607423+0000 mon.a (mon.0) 526 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-10T05:46:37.934 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:37 vm05 bash[17864]: audit 2026-03-10T05:46:37.741787+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:37.989 INFO:teuthology.orchestra.run.vm05.stdout:Created osd(s) 7 on host 'vm05'
2026-03-10T05:46:38.051 DEBUG:teuthology.orchestra.run.vm05:osd.7> sudo journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.7.service
2026-03-10T05:46:38.052 INFO:tasks.cephadm:Waiting for 8 OSDs to come up...
2026-03-10T05:46:38.052 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph osd stat -f json
"ids": ["7"]}]: dispatch 2026-03-10T05:46:38.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:37 vm02 bash[22526]: audit 2026-03-10T05:46:37.741787+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:46:38.426 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T05:46:38.474 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":43,"num_osds":8,"num_up_osds":7,"osd_up_since":1773121585,"num_in_osds":8,"osd_in_since":1773121587,"num_remapped_pgs":0} 2026-03-10T05:46:39.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:38 vm05 bash[17864]: audit 2026-03-10T05:46:37.986327+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:46:39.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:38 vm05 bash[17864]: audit 2026-03-10T05:46:38.007777+0000 mon.a (mon.0) 529 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:46:39.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:38 vm05 bash[17864]: audit 2026-03-10T05:46:38.008657+0000 mon.a (mon.0) 530 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:46:39.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:38 vm05 bash[17864]: audit 2026-03-10T05:46:38.009088+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:46:39.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:38 vm05 bash[17864]: audit 2026-03-10T05:46:38.426542+0000 mon.c (mon.1) 19 : audit [DBG] from='client.? 
192.168.123.102:0/2655174858' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T05:46:39.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:38 vm05 bash[17864]: audit 2026-03-10T05:46:38.605153+0000 mon.b (mon.2) 17 : audit [INF] from='osd.7 [v2:192.168.123.105:6824/3413503051,v1:192.168.123.105:6825/3413503051]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T05:46:39.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:38 vm05 bash[17864]: audit 2026-03-10T05:46:38.609153+0000 mon.a (mon.0) 532 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T05:46:39.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:38 vm05 bash[17864]: cluster 2026-03-10T05:46:38.609198+0000 mon.a (mon.0) 533 : cluster [DBG] osdmap e44: 8 total, 7 up, 8 in 2026-03-10T05:46:39.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:38 vm05 bash[17864]: audit 2026-03-10T05:46:38.609476+0000 mon.a (mon.0) 534 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T05:46:39.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:38 vm05 bash[17864]: audit 2026-03-10T05:46:38.610449+0000 mon.a (mon.0) 535 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T05:46:39.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:38 vm02 bash[17462]: audit 2026-03-10T05:46:37.986327+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:46:39.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:38 vm02 bash[17462]: audit 2026-03-10T05:46:38.007777+0000 mon.a (mon.0) 529 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:46:39.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:38 vm02 bash[17462]: audit 2026-03-10T05:46:38.008657+0000 mon.a (mon.0) 530 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:46:39.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:38 vm02 bash[17462]: audit 2026-03-10T05:46:38.009088+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:46:39.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:38 vm02 bash[17462]: audit 2026-03-10T05:46:38.426542+0000 mon.c (mon.1) 19 : audit [DBG] from='client.? 
192.168.123.102:0/2655174858' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T05:46:39.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:38 vm02 bash[22526]: audit 2026-03-10T05:46:38.605153+0000 mon.b (mon.2) 17 : audit [INF] from='osd.7 [v2:192.168.123.105:6824/3413503051,v1:192.168.123.105:6825/3413503051]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T05:46:39.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:38 vm02 bash[22526]: audit 2026-03-10T05:46:38.609153+0000 mon.a (mon.0) 532 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T05:46:39.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:38 vm02 bash[22526]: cluster 2026-03-10T05:46:38.609198+0000 mon.a (mon.0) 533 : cluster [DBG] osdmap e44: 8 total, 7 up, 8 in 2026-03-10T05:46:39.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:38 vm02 bash[22526]: audit 2026-03-10T05:46:38.609476+0000 mon.a (mon.0) 534 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T05:46:39.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:38 vm02 bash[22526]: audit 2026-03-10T05:46:38.610449+0000 mon.a (mon.0) 535 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T05:46:39.475 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph osd stat -f json 2026-03-10T05:46:39.871 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T05:46:39.917 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":45,"num_osds":8,"num_up_osds":7,"osd_up_since":1773121585,"num_in_osds":8,"osd_in_since":1773121587,"num_remapped_pgs":0} 2026-03-10T05:46:39.991 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:46:39 vm05 bash[30264]: debug 2026-03-10T05:46:39.610+0000 7f9ba1087700 -1 osd.7 0 waiting for initial osdmap 2026-03-10T05:46:39.991 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:46:39 vm05 bash[30264]: debug 2026-03-10T05:46:39.622+0000 7f9b9a21b700 -1 osd.7 45 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T05:46:40.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:39 vm05 bash[17864]: cluster 2026-03-10T05:46:38.767247+0000 mgr.y (mgr.14152) 129 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail; 39 KiB/s, 0 objects/s recovering 2026-03-10T05:46:40.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:39 vm05 bash[17864]: audit 2026-03-10T05:46:39.614265+0000 mon.a (mon.0) 536 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T05:46:40.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:39 vm05 bash[17864]: cluster 2026-03-10T05:46:39.614302+0000 mon.a (mon.0) 537 : cluster [DBG] osdmap e45: 8 total, 7 up, 8 in 2026-03-10T05:46:40.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:39 vm05 bash[17864]: audit 2026-03-10T05:46:39.614746+0000 mon.a (mon.0) 538 : audit [DBG] from='mgr.14152 
192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T05:46:40.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:39 vm05 bash[17864]: audit 2026-03-10T05:46:39.618574+0000 mon.a (mon.0) 539 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T05:46:40.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:39 vm05 bash[17864]: audit 2026-03-10T05:46:39.871823+0000 mon.c (mon.1) 20 : audit [DBG] from='client.? 192.168.123.102:0/546352769' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T05:46:40.333 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:39 vm02 bash[17462]: cluster 2026-03-10T05:46:38.767247+0000 mgr.y (mgr.14152) 129 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail; 39 KiB/s, 0 objects/s recovering 2026-03-10T05:46:40.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:39 vm02 bash[17462]: audit 2026-03-10T05:46:39.614265+0000 mon.a (mon.0) 536 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T05:46:40.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:39 vm02 bash[17462]: cluster 2026-03-10T05:46:39.614302+0000 mon.a (mon.0) 537 : cluster [DBG] osdmap e45: 8 total, 7 up, 8 in 2026-03-10T05:46:40.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:39 vm02 bash[17462]: audit 2026-03-10T05:46:39.614746+0000 mon.a (mon.0) 538 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T05:46:40.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:39 vm02 bash[17462]: audit 2026-03-10T05:46:39.618574+0000 mon.a (mon.0) 539 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T05:46:40.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:39 vm02 bash[17462]: audit 2026-03-10T05:46:39.871823+0000 mon.c (mon.1) 20 : audit [DBG] from='client.? 
192.168.123.102:0/546352769' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T05:46:40.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:39 vm02 bash[22526]: cluster 2026-03-10T05:46:38.767247+0000 mgr.y (mgr.14152) 129 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail; 39 KiB/s, 0 objects/s recovering 2026-03-10T05:46:40.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:39 vm02 bash[22526]: audit 2026-03-10T05:46:39.614265+0000 mon.a (mon.0) 536 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T05:46:40.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:39 vm02 bash[22526]: cluster 2026-03-10T05:46:39.614302+0000 mon.a (mon.0) 537 : cluster [DBG] osdmap e45: 8 total, 7 up, 8 in 2026-03-10T05:46:40.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:39 vm02 bash[22526]: audit 2026-03-10T05:46:39.614746+0000 mon.a (mon.0) 538 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T05:46:40.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:39 vm02 bash[22526]: audit 2026-03-10T05:46:39.618574+0000 mon.a (mon.0) 539 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T05:46:40.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:39 vm02 bash[22526]: audit 2026-03-10T05:46:39.871823+0000 mon.c (mon.1) 20 : audit [DBG] from='client.? 192.168.123.102:0/546352769' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T05:46:40.917 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph osd stat -f json 2026-03-10T05:46:41.174 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:40 vm02 bash[22526]: cluster 2026-03-10T05:46:38.565717+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T05:46:41.174 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:40 vm02 bash[22526]: cluster 2026-03-10T05:46:38.565807+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T05:46:41.174 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:40 vm02 bash[22526]: audit 2026-03-10T05:46:40.618454+0000 mon.a (mon.0) 540 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T05:46:41.174 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:40 vm02 bash[22526]: cluster 2026-03-10T05:46:40.623508+0000 mon.a (mon.0) 541 : cluster [INF] osd.7 [v2:192.168.123.105:6824/3413503051,v1:192.168.123.105:6825/3413503051] boot 2026-03-10T05:46:41.174 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:40 vm02 bash[22526]: cluster 2026-03-10T05:46:40.623592+0000 mon.a (mon.0) 542 : cluster [DBG] osdmap e46: 8 total, 8 up, 8 in 2026-03-10T05:46:41.174 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:40 vm02 bash[22526]: audit 2026-03-10T05:46:40.623802+0000 mon.a (mon.0) 543 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T05:46:41.174 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:40 vm02 bash[17462]: cluster 
2026-03-10T05:46:38.565717+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T05:46:41.174 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:40 vm02 bash[17462]: cluster 2026-03-10T05:46:38.565807+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T05:46:41.174 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:40 vm02 bash[17462]: audit 2026-03-10T05:46:40.618454+0000 mon.a (mon.0) 540 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T05:46:41.174 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:40 vm02 bash[17462]: cluster 2026-03-10T05:46:40.623508+0000 mon.a (mon.0) 541 : cluster [INF] osd.7 [v2:192.168.123.105:6824/3413503051,v1:192.168.123.105:6825/3413503051] boot 2026-03-10T05:46:41.174 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:40 vm02 bash[17462]: cluster 2026-03-10T05:46:40.623592+0000 mon.a (mon.0) 542 : cluster [DBG] osdmap e46: 8 total, 8 up, 8 in 2026-03-10T05:46:41.174 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:40 vm02 bash[17462]: audit 2026-03-10T05:46:40.623802+0000 mon.a (mon.0) 543 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T05:46:41.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:40 vm05 bash[17864]: cluster 2026-03-10T05:46:38.565717+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T05:46:41.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:40 vm05 bash[17864]: cluster 2026-03-10T05:46:38.565807+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T05:46:41.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:40 vm05 bash[17864]: audit 2026-03-10T05:46:40.618454+0000 mon.a (mon.0) 540 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T05:46:41.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:40 vm05 bash[17864]: cluster 2026-03-10T05:46:40.623508+0000 mon.a (mon.0) 541 : cluster [INF] osd.7 [v2:192.168.123.105:6824/3413503051,v1:192.168.123.105:6825/3413503051] boot 2026-03-10T05:46:41.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:40 vm05 bash[17864]: cluster 2026-03-10T05:46:40.623592+0000 mon.a (mon.0) 542 : cluster [DBG] osdmap e46: 8 total, 8 up, 8 in 2026-03-10T05:46:41.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:40 vm05 bash[17864]: audit 2026-03-10T05:46:40.623802+0000 mon.a (mon.0) 543 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T05:46:41.336 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T05:46:41.389 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":46,"num_osds":8,"num_up_osds":8,"osd_up_since":1773121600,"num_in_osds":8,"osd_in_since":1773121587,"num_remapped_pgs":1} 2026-03-10T05:46:41.390 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph osd dump --format=json 2026-03-10T05:46:42.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:41 vm05 bash[17864]: cluster 2026-03-10T05:46:40.767468+0000 mgr.y (mgr.14152) 130 : cluster [DBG] pgmap v109: 1 pgs: 1 peering; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail 2026-03-10T05:46:42.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:41 vm05 bash[17864]: audit 
2026-03-10T05:46:41.336245+0000 mon.a (mon.0) 544 : audit [DBG] from='client.? 192.168.123.102:0/712836248' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T05:46:42.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:41 vm05 bash[17864]: cluster 2026-03-10T05:46:41.621886+0000 mon.a (mon.0) 545 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-10T05:46:42.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:41 vm05 bash[17864]: cluster 2026-03-10T05:46:41.627277+0000 mon.a (mon.0) 546 : cluster [DBG] osdmap e47: 8 total, 8 up, 8 in 2026-03-10T05:46:42.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:41 vm02 bash[17462]: cluster 2026-03-10T05:46:40.767468+0000 mgr.y (mgr.14152) 130 : cluster [DBG] pgmap v109: 1 pgs: 1 peering; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail 2026-03-10T05:46:42.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:41 vm02 bash[17462]: audit 2026-03-10T05:46:41.336245+0000 mon.a (mon.0) 544 : audit [DBG] from='client.? 192.168.123.102:0/712836248' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T05:46:42.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:41 vm02 bash[17462]: cluster 2026-03-10T05:46:41.621886+0000 mon.a (mon.0) 545 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-10T05:46:42.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:41 vm02 bash[17462]: cluster 2026-03-10T05:46:41.627277+0000 mon.a (mon.0) 546 : cluster [DBG] osdmap e47: 8 total, 8 up, 8 in 2026-03-10T05:46:42.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:41 vm02 bash[22526]: cluster 2026-03-10T05:46:40.767468+0000 mgr.y (mgr.14152) 130 : cluster [DBG] pgmap v109: 1 pgs: 1 peering; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail 2026-03-10T05:46:42.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:41 vm02 bash[22526]: audit 2026-03-10T05:46:41.336245+0000 mon.a (mon.0) 544 : audit [DBG] from='client.? 
192.168.123.102:0/712836248' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T05:46:42.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:41 vm02 bash[22526]: cluster 2026-03-10T05:46:41.621886+0000 mon.a (mon.0) 545 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-10T05:46:42.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:41 vm02 bash[22526]: cluster 2026-03-10T05:46:41.627277+0000 mon.a (mon.0) 546 : cluster [DBG] osdmap e47: 8 total, 8 up, 8 in 2026-03-10T05:46:43.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:43 vm05 bash[17864]: cephadm 2026-03-10T05:46:42.367560+0000 mgr.y (mgr.14152) 131 : cephadm [INF] Detected new or changed devices on vm05 2026-03-10T05:46:43.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:43 vm05 bash[17864]: audit 2026-03-10T05:46:42.375590+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:46:43.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:43 vm05 bash[17864]: audit 2026-03-10T05:46:42.376359+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:46:43.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:43 vm05 bash[17864]: audit 2026-03-10T05:46:42.376786+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:46:43.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:43 vm05 bash[17864]: audit 2026-03-10T05:46:42.377133+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:46:43.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:43 vm05 bash[17864]: audit 2026-03-10T05:46:42.377504+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:46:43.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:43 vm05 bash[17864]: cephadm 2026-03-10T05:46:42.377810+0000 mgr.y (mgr.14152) 132 : cephadm [INF] Adjusting osd_memory_target on vm05 to 113.9M 2026-03-10T05:46:43.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:43 vm05 bash[17864]: cephadm 2026-03-10T05:46:42.378157+0000 mgr.y (mgr.14152) 133 : cephadm [WRN] Unable to set osd_memory_target on vm05 to 119478988: error parsing value: Value '119478988' is below minimum 939524096 2026-03-10T05:46:43.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:43 vm05 bash[17864]: audit 2026-03-10T05:46:42.382922+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:46:43.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:43 vm05 bash[17864]: cluster 2026-03-10T05:46:42.641566+0000 mon.a (mon.0) 553 : cluster [DBG] osdmap e48: 8 total, 8 up, 8 in 2026-03-10T05:46:43.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:43 vm05 bash[17864]: cluster 2026-03-10T05:46:42.767762+0000 mgr.y (mgr.14152) 134 : cluster [DBG] pgmap v112: 1 pgs: 1 peering; 449 KiB data, 47 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:46:43.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:43 vm02 bash[17462]: 
cephadm 2026-03-10T05:46:42.367560+0000 mgr.y (mgr.14152) 131 : cephadm [INF] Detected new or changed devices on vm05 2026-03-10T05:46:43.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:43 vm02 bash[17462]: audit 2026-03-10T05:46:42.375590+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:46:43.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:43 vm02 bash[17462]: audit 2026-03-10T05:46:42.376359+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:46:43.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:43 vm02 bash[17462]: audit 2026-03-10T05:46:42.376786+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:46:43.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:43 vm02 bash[17462]: audit 2026-03-10T05:46:42.377133+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:46:43.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:43 vm02 bash[17462]: audit 2026-03-10T05:46:42.377504+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:46:43.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:43 vm02 bash[17462]: cephadm 2026-03-10T05:46:42.377810+0000 mgr.y (mgr.14152) 132 : cephadm [INF] Adjusting osd_memory_target on vm05 to 113.9M 2026-03-10T05:46:43.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:43 vm02 bash[17462]: cephadm 2026-03-10T05:46:42.378157+0000 mgr.y (mgr.14152) 133 : cephadm [WRN] Unable to set osd_memory_target on vm05 to 119478988: error parsing value: Value '119478988' is below minimum 939524096 2026-03-10T05:46:43.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:43 vm02 bash[17462]: audit 2026-03-10T05:46:42.382922+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:46:43.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:43 vm02 bash[17462]: cluster 2026-03-10T05:46:42.641566+0000 mon.a (mon.0) 553 : cluster [DBG] osdmap e48: 8 total, 8 up, 8 in 2026-03-10T05:46:43.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:43 vm02 bash[17462]: cluster 2026-03-10T05:46:42.767762+0000 mgr.y (mgr.14152) 134 : cluster [DBG] pgmap v112: 1 pgs: 1 peering; 449 KiB data, 47 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:46:43.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:43 vm02 bash[22526]: cephadm 2026-03-10T05:46:42.367560+0000 mgr.y (mgr.14152) 131 : cephadm [INF] Detected new or changed devices on vm05 2026-03-10T05:46:43.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:43 vm02 bash[22526]: audit 2026-03-10T05:46:42.375590+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' 2026-03-10T05:46:43.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:43 vm02 bash[22526]: audit 2026-03-10T05:46:42.376359+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 
2026-03-10T05:46:43.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:43 vm02 bash[22526]: audit 2026-03-10T05:46:42.376786+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:46:43.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:43 vm02 bash[22526]: audit 2026-03-10T05:46:42.377133+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:46:43.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:43 vm02 bash[22526]: audit 2026-03-10T05:46:42.377504+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:46:43.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:43 vm02 bash[22526]: cephadm 2026-03-10T05:46:42.377810+0000 mgr.y (mgr.14152) 132 : cephadm [INF] Adjusting osd_memory_target on vm05 to 113.9M
2026-03-10T05:46:43.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:43 vm02 bash[22526]: cephadm 2026-03-10T05:46:42.378157+0000 mgr.y (mgr.14152) 133 : cephadm [WRN] Unable to set osd_memory_target on vm05 to 119478988: error parsing value: Value '119478988' is below minimum 939524096
2026-03-10T05:46:43.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:43 vm02 bash[22526]: audit 2026-03-10T05:46:42.382922+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:43.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:43 vm02 bash[22526]: cluster 2026-03-10T05:46:42.641566+0000 mon.a (mon.0) 553 : cluster [DBG] osdmap e48: 8 total, 8 up, 8 in
2026-03-10T05:46:43.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:43 vm02 bash[22526]: cluster 2026-03-10T05:46:42.767762+0000 mgr.y (mgr.14152) 134 : cluster [DBG] pgmap v112: 1 pgs: 1 peering; 449 KiB data, 47 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:46:43.990 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/mon.c/config
2026-03-10T05:46:44.306 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T05:46:44.306 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":48,"fsid":"107483ae-1c44-11f1-b530-c1172cd6122a","created":"2026-03-10T05:43:51.949234+0000","modified":"2026-03-10T05:46:42.631499+0000","last_up_change":"2026-03-10T05:46:40.617204+0000","last_in_change":"2026-03-10T05:46:27.560077+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"quincy","pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T05:45:26.175212+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"20","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}}}],"osds":[{"osd":0,"uuid":"181bfe3a-c244-4b31-bf3a-c6074cc650d1","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":46,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6802","nonce":3358143121},{"type":"v1","addr":"192.168.123.102:6803","nonce":3358143121}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6804","nonce":3358143121},{"type":"v1","addr":"192.168.123.102:6805","nonce":3358143121}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6808","nonce":3358143121},{"type":"v1","addr":"192.168.123.102:6809","nonce":3358143121}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6806","nonce":3358143121},{"type":"v1","addr":"192.168.123.102:6807","nonce":3358143121}]},"public_addr":"192.168.123.102:6803/3358143121","cluster_addr":"192.168.123.102:6805/3358143121","heartbeat_back_addr":"192.168.123.102:6809/3358143121","heartbeat_front_addr":"192.168.123.102:6807/3358143121","state":["exists","up"]},{"osd":1,"uuid":"c0820da9-42eb-422f-88aa-598d51d4e5e7","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":29,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6810","nonce":3944310722},{"type":"v1","addr":"192.168.123.102:6811","nonce":3944310722}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6812","nonce":3944310722},{"type":"v1","addr":"192.168.123.102:6813","nonce":3944310722}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6816","nonce":3944310722},{"type":"v1","addr":"192.168.123.102:6817","nonce":3944310722}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6814","nonce":3944310722},{"type":"v1","addr":"192.168.123.102:6815","nonce":3944310722}]},"public_addr":"192.168.123.102:6811/3944310722","cluster_addr":"192.168.123.102:6813/3944310722","heartbeat_back_addr":"192.168.123.102:6817/3944310722","heartbeat_front_addr":"192.168.123.102:6815/3944310722","state":["exists","up"]},{"osd":2,"uuid":"2d5b11d8-3856-47e7-80bc-ba0d5e91fd6c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6818","nonce":1818843754},{"type":"v1","addr":"192.168.123.102:6819","nonce":1818843754}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6820","nonce":1818843754},{"type":"v1","addr":"192.168.123.102:6821","nonce":1818843754}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6824","nonce":1818843754},{"type":"v1","addr":"192.168.123.102:6825","nonce":1818843754}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6822","nonce":1818843754},{"type":"v1","addr":"192.168.123.102:6823","nonce":1818843754}]},"public_addr":"192.168.123.102:6819/1818843754","cluster_addr":"192.168.123.102:6821/1818843754","heartbeat_back_addr":"192.168.123.102:6825/1818843754","heartbeat_front_addr":"192.168.123.102:6823/1818843754","state":["exists","up"]},{"osd":3,"uuid":"c8c62231-6895-42f2-ba03-c49e0ca5380e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":23,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":268408037},{"type":"v1","addr":"192.168.123.102:6827","nonce":268408037}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":268408037},{"type":"v1","addr":"192.168.123.102:6829","nonce":268408037}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6832","nonce":268408037},{"type":"v1","addr":"192.168.123.102:6833","nonce":268408037}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6830","nonce":268408037},{"type":"v1","addr":"192.168.123.102:6831","nonce":268408037}]},"public_addr":"192.168.123.102:6827/268408037","cluster_addr":"192.168.123.102:6829/268408037","heartbeat_back_addr":"192.168.123.102:6833/268408037","heartbeat_front_addr":"192.168.123.102:6831/268408037","state":["exists","up"]},{"osd":4,"uuid":"49541bd1-b8b0-4d09-9b97-6ca490c33f9d","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":28,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6800","nonce":1737072685},{"type":"v1","addr":"192.168.123.105:6801","nonce":1737072685}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6802","nonce":1737072685},{"type":"v1","addr":"192.168.123.105:6803","nonce":1737072685}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6806","nonce":1737072685},{"type":"v1","addr":"192.168.123.105:6807","nonce":1737072685}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6804","nonce":1737072685},{"type":"v1","addr":"192.168.123.105:6805","nonce":1737072685}]},"public_addr":"192.168.123.105:6801/1737072685","cluster_addr":"192.168.123.105:6803/1737072685","heartbeat_back_addr":"192.168.123.105:6807/1737072685","heartbeat_front_addr":"192.168.123.105:6805/1737072685","state":["exists","up"]},{"osd":5,"uuid":"2b35feb0-b492-4603-81e0-b864fb275f8c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":34,"up_thru":35,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6808","nonce":3303341454},{"type":"v1","addr":"192.168.123.105:6809","nonce":3303341454}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6810","nonce":3303341454},{"type":"v1","addr":"192.168.123.105:6811","nonce":3303341454}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6814","nonce":3303341454},{"type":"v1","addr":"192.168.123.105:6815","nonce":3303341454}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6812","nonce":3303341454},{"type":"v1","addr":"192.168.123.105:6813","nonce":3303341454}]},"public_addr":"192.168.123.105:6809/3303341454","cluster_addr":"192.168.123.105:6811/3303341454","heartbeat_back_addr":"192.168.123.105:6815/3303341454","heartbeat_front_addr":"192.168.123.105:6813/3303341454","state":["exists","up"]},{"osd":6,"uuid":"b2fa96ba-d56a-43b9-ab42-f9fc8abe2daf","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":40,"up_thru":41,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6816","nonce":566773014},{"type":"v1","addr":"192.168.123.105:6817","nonce":566773014}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6818","nonce":566773014},{"type":"v1","addr":"192.168.123.105:6819","nonce":566773014}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6822","nonce":566773014},{"type":"v1","addr":"192.168.123.105:6823","nonce":566773014}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6820","nonce":566773014},{"type":"v1","addr":"192.168.123.105:6821","nonce":566773014}]},"public_addr":"192.168.123.105:6817/566773014","cluster_addr":"192.168.123.105:6819/566773014","heartbeat_back_addr":"192.168.123.105:6823/566773014","heartbeat_front_addr":"192.168.123.105:6821/566773014","state":["exists","up"]},{"osd":7,"uuid":"2d1f3ab7-28e5-424b-a95a-4d9947f78095","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":46,"up_thru":47,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":3413503051},{"type":"v1","addr":"192.168.123.105:6825","nonce":3413503051}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3413503051},{"type":"v1","addr":"192.168.123.105:6827","nonce":3413503051}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6830","nonce":3413503051},{"type":"v1","addr":"192.168.123.105:6831","nonce":3413503051}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":3413503051},{"type":"v1","addr":"192.168.123.105:6829","nonce":3413503051}]},"public_addr":"192.168.123.105:6825/3413503051","cluster_addr":"192.168.123.105:6827/3413503051","heartbeat_back_addr":"192.168.123.105:6831/3413503051","heartbeat_front_addr":"192.168.123.105:6829/3413503051","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:44:53.076359+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:45:08.109400+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:45:23.624885+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:45:39.146568+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:45:53.043525+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:46:07.589294+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:46:23.196862+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:46:38.565809+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.102:0/180339681":"2026-03-11T05:44:13.944512+0000","192.168.123.102:0/3558265816":"2026-03-11T05:44:13.944512+0000","192.168.123.102:0/1876503597":"2026-03-11T05:44:13.944512+0000","192.168.123.102:6801/3932825893":"2026-03-11T05:44:13.944512+0000","192.168.123.102:6800/3932825893":"2026-03-11T05:44:13.944512+0000","192.168.123.102:0/2702126893":"2026-03-11T05:44:04.884697+0000","192.168.123.102:0/4232033379":"2026-03-11T05:44:04.884697+0000","192.168.123.102:6801/123828670":"2026-03-11T05:44:04.884697+0000","192.168.123.102:0/3250290581":"2026-03-11T05:44:04.884697+0000","192.168.123.102:6800/123828670":"2026-03-11T05:44:04.884697+0000"},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}}
2026-03-10T05:46:44.354 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-10T05:45:26.175212+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '20', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}}]
2026-03-10T05:46:44.354 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph osd pool get .mgr pg_num
2026-03-10T05:46:44.713 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:44 vm02 bash[17462]: audit 2026-03-10T05:46:44.305826+0000 mon.c (mon.1) 21 : audit [DBG] from='client.? 192.168.123.102:0/205308868' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-10T05:46:45.003 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:44 vm02 bash[22526]: audit 2026-03-10T05:46:44.305826+0000 mon.c (mon.1) 21 : audit [DBG] from='client.? 192.168.123.102:0/205308868' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-10T05:46:45.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:44 vm05 bash[17864]: audit 2026-03-10T05:46:44.305826+0000 mon.c (mon.1) 21 : audit [DBG] from='client.? 192.168.123.102:0/205308868' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-10T05:46:46.000 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/mon.c/config
2026-03-10T05:46:46.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:45 vm05 bash[17864]: cluster 2026-03-10T05:46:44.768012+0000 mgr.y (mgr.14152) 135 : cluster [DBG] pgmap v113: 1 pgs: 1 peering; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:46:46.014 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:45 vm02 bash[17462]: cluster 2026-03-10T05:46:44.768012+0000 mgr.y (mgr.14152) 135 : cluster [DBG] pgmap v113: 1 pgs: 1 peering; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:46:46.014 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:45 vm02 bash[22526]: cluster 2026-03-10T05:46:44.768012+0000 mgr.y (mgr.14152) 135 : cluster [DBG] pgmap v113: 1 pgs: 1 peering; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:46:46.300 INFO:teuthology.orchestra.run.vm02.stdout:pg_num: 1
2026-03-10T05:46:46.345 INFO:tasks.cephadm:Adding prometheus.a on vm05
2026-03-10T05:46:46.345 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph orch apply prometheus '1;vm05=a'
2026-03-10T05:46:46.786 INFO:teuthology.orchestra.run.vm05.stdout:Scheduled prometheus update...
2026-03-10T05:46:46.844 DEBUG:teuthology.orchestra.run.vm05:prometheus.a> sudo journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@prometheus.a.service
2026-03-10T05:46:46.845 INFO:tasks.cephadm:Adding node-exporter.a on vm02
2026-03-10T05:46:46.845 INFO:tasks.cephadm:Adding node-exporter.b on vm05
2026-03-10T05:46:46.845 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph orch apply node-exporter '2;vm02=a;vm05=b'
2026-03-10T05:46:47.005 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:46 vm05 bash[17864]: audit 2026-03-10T05:46:46.299828+0000 mon.c (mon.1) 22 : audit [DBG] from='client.? 192.168.123.102:0/316852680' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch
2026-03-10T05:46:47.013 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:46 vm02 bash[17462]: audit 2026-03-10T05:46:46.299828+0000 mon.c (mon.1) 22 : audit [DBG] from='client.? 192.168.123.102:0/316852680' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch
2026-03-10T05:46:47.013 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:46 vm02 bash[22526]: audit 2026-03-10T05:46:46.299828+0000 mon.c (mon.1) 22 : audit [DBG] from='client.? 192.168.123.102:0/316852680' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch
2026-03-10T05:46:47.329 INFO:teuthology.orchestra.run.vm05.stdout:Scheduled node-exporter update...
2026-03-10T05:46:47.384 DEBUG:teuthology.orchestra.run.vm02:node-exporter.a> sudo journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@node-exporter.a.service
2026-03-10T05:46:47.386 DEBUG:teuthology.orchestra.run.vm05:node-exporter.b> sudo journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@node-exporter.b.service
2026-03-10T05:46:47.387 INFO:tasks.cephadm:Adding alertmanager.a on vm02
2026-03-10T05:46:47.387 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph orch apply alertmanager '1;vm02=a'
2026-03-10T05:46:47.936 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:47 vm05 bash[18520]: ignoring --setuser ceph since I am not root
2026-03-10T05:46:47.936 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:47 vm05 bash[18520]: ignoring --setgroup ceph since I am not root
2026-03-10T05:46:47.936 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:47 vm05 bash[17864]: cluster 2026-03-10T05:46:46.768246+0000 mgr.y (mgr.14152) 136 : cluster [DBG] pgmap v114: 1 pgs: 1 active+recovering; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail; 11 KiB/s, 0 objects/s recovering
2026-03-10T05:46:47.936 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:47 vm05 bash[17864]: audit 2026-03-10T05:46:46.781495+0000 mgr.y (mgr.14152) 137 : audit [DBG] from='client.24293 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm05=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:46:47.936 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:47 vm05 bash[17864]: cephadm 2026-03-10T05:46:46.782287+0000 mgr.y (mgr.14152) 138 : cephadm [INF] Saving service prometheus spec with placement vm05=a;count:1
2026-03-10T05:46:47.936 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:47 vm05 bash[17864]: audit 2026-03-10T05:46:46.786008+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:47.936 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:47 vm05 bash[17864]: audit 2026-03-10T05:46:46.806448+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:46:47.936 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:47 vm05 bash[17864]: audit 2026-03-10T05:46:46.807212+0000 mon.a (mon.0) 556 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:46:47.937 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:47 vm05 bash[17864]: audit 2026-03-10T05:46:46.807614+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:46:47.937 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:47 vm05 bash[17864]: audit 2026-03-10T05:46:46.811950+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:47.937 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:47 vm05 bash[17864]: audit 2026-03-10T05:46:46.814296+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
2026-03-10T05:46:47.937 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:47 vm05 bash[17864]: audit 2026-03-10T05:46:47.328815+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:47.937 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:47 vm05 bash[17864]: cluster 2026-03-10T05:46:47.720430+0000 mon.a (mon.0) 561 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering)
2026-03-10T05:46:47.937 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:47 vm05 bash[17864]: cluster 2026-03-10T05:46:47.720485+0000 mon.a (mon.0) 562 : cluster [INF] Cluster is now healthy
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:47 vm02 bash[22526]: cluster 2026-03-10T05:46:46.768246+0000 mgr.y (mgr.14152) 136 : cluster [DBG] pgmap v114: 1 pgs: 1 active+recovering; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail; 11 KiB/s, 0 objects/s recovering
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:47 vm02 bash[22526]: audit 2026-03-10T05:46:46.781495+0000 mgr.y (mgr.14152) 137 : audit [DBG] from='client.24293 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm05=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:47 vm02 bash[22526]: cephadm 2026-03-10T05:46:46.782287+0000 mgr.y (mgr.14152) 138 : cephadm [INF] Saving service prometheus spec with placement vm05=a;count:1
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:47 vm02 bash[22526]: audit 2026-03-10T05:46:46.786008+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:47 vm02 bash[22526]: audit 2026-03-10T05:46:46.806448+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:47 vm02 bash[22526]: audit 2026-03-10T05:46:46.807212+0000 mon.a (mon.0) 556 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:47 vm02 bash[22526]: audit 2026-03-10T05:46:46.807614+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:47 vm02 bash[22526]: audit 2026-03-10T05:46:46.811950+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:47 vm02 bash[22526]: audit 2026-03-10T05:46:46.814296+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:47 vm02 bash[22526]: audit 2026-03-10T05:46:47.328815+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:47 vm02 bash[22526]: cluster 2026-03-10T05:46:47.720430+0000 mon.a (mon.0) 561 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering)
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:47 vm02 bash[22526]: cluster 2026-03-10T05:46:47.720485+0000 mon.a (mon.0) 562 : cluster [INF] Cluster is now healthy
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:47 vm02 bash[17462]: cluster 2026-03-10T05:46:46.768246+0000 mgr.y (mgr.14152) 136 : cluster [DBG] pgmap v114: 1 pgs: 1 active+recovering; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail; 11 KiB/s, 0 objects/s recovering
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:47 vm02 bash[17462]: audit 2026-03-10T05:46:46.781495+0000 mgr.y (mgr.14152) 137 : audit [DBG] from='client.24293 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm05=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:47 vm02 bash[17462]: cephadm 2026-03-10T05:46:46.782287+0000 mgr.y (mgr.14152) 138 : cephadm [INF] Saving service prometheus spec with placement vm05=a;count:1
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:47 vm02 bash[17462]: audit 2026-03-10T05:46:46.786008+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:47 vm02 bash[17462]: audit 2026-03-10T05:46:46.806448+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:47 vm02 bash[17462]: audit 2026-03-10T05:46:46.807212+0000 mon.a (mon.0) 556 : audit [DBG] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:47 vm02 bash[17462]: audit 2026-03-10T05:46:46.807614+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:47 vm02 bash[17462]: audit 2026-03-10T05:46:46.811950+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:47 vm02 bash[17462]: audit 2026-03-10T05:46:46.814296+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:47 vm02 bash[17462]: audit 2026-03-10T05:46:47.328815+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y'
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:47 vm02 bash[17462]: cluster 2026-03-10T05:46:47.720430+0000 mon.a (mon.0) 561 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering)
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:47 vm02 bash[17462]: cluster 2026-03-10T05:46:47.720485+0000 mon.a (mon.0) 562 : cluster [INF] Cluster is now healthy
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:47 vm02 bash[17731]: ignoring --setuser ceph since I am not root
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:47 vm02 bash[17731]: ignoring --setgroup ceph since I am not root
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:47 vm02 bash[17731]: debug 2026-03-10T05:46:47.859+0000 7f497aa5c700 1 -- 192.168.123.102:0/2073188173 <== mon.0 v2:192.168.123.102:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 194+0+0 (secure 0 0 0) 0x55cabfa40340 con 0x55cac07bc400
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:47 vm02 bash[17731]: debug 2026-03-10T05:46:47.927+0000 7f49834b8000 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T05:46:48.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:47 vm02 bash[17731]: debug 2026-03-10T05:46:47.971+0000 7f49834b8000 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T05:46:48.258 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:47 vm05 bash[18520]: debug 2026-03-10T05:46:47.926+0000 7f9338678000 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T05:46:48.258 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:47 vm05 bash[18520]: debug 2026-03-10T05:46:47.974+0000 7f9338678000 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T05:46:48.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:48 vm02 bash[17731]: debug 2026-03-10T05:46:48.243+0000 7f49834b8000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T05:46:48.742 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:48 vm05 bash[18520]: debug 2026-03-10T05:46:48.278+0000 7f9338678000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T05:46:48.981 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:48 vm02 bash[22526]: audit 2026-03-10T05:46:47.821642+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
2026-03-10T05:46:48.981 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:48 vm02 bash[22526]: cluster 2026-03-10T05:46:47.821714+0000 mon.a (mon.0) 564 : cluster [DBG] mgrmap e16: y(active, since 2m), standbys: x
2026-03-10T05:46:48.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:48 vm02 bash[17462]: audit 2026-03-10T05:46:47.821642+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
2026-03-10T05:46:48.981 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:48 vm02 bash[17462]: cluster 2026-03-10T05:46:47.821714+0000 mon.a (mon.0) 564 : cluster [DBG] mgrmap e16: y(active, since 2m), standbys: x
2026-03-10T05:46:48.981 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:48 vm02 bash[17731]: debug 2026-03-10T05:46:48.707+0000 7f49834b8000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T05:46:48.981 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:48 vm02 bash[17731]: debug 2026-03-10T05:46:48.791+0000 7f49834b8000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T05:46:49.007 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:48 vm05 bash[18520]: debug 2026-03-10T05:46:48.730+0000 7f9338678000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T05:46:49.008 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:48 vm05 bash[18520]: debug 2026-03-10T05:46:48.818+0000 7f9338678000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T05:46:49.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:48 vm05 bash[17864]: audit 2026-03-10T05:46:47.821642+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.14152 192.168.123.102:0/875640324' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T05:46:49.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:48 vm05 bash[17864]: cluster 2026-03-10T05:46:47.821714+0000 mon.a (mon.0) 564 : cluster [DBG] mgrmap e16: y(active, since 2m), standbys: x 2026-03-10T05:46:49.250 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:48 vm02 bash[17731]: debug 2026-03-10T05:46:48.975+0000 7f49834b8000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T05:46:49.250 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:49 vm02 bash[17731]: debug 2026-03-10T05:46:49.067+0000 7f49834b8000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T05:46:49.250 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:49 vm02 bash[17731]: debug 2026-03-10T05:46:49.119+0000 7f49834b8000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T05:46:49.250 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:49 vm02 bash[17731]: debug 2026-03-10T05:46:49.247+0000 7f49834b8000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T05:46:49.275 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:49 vm05 bash[18520]: debug 2026-03-10T05:46:48.998+0000 7f9338678000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T05:46:49.275 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:49 vm05 bash[18520]: debug 2026-03-10T05:46:49.090+0000 7f9338678000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T05:46:49.275 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:49 vm05 bash[18520]: debug 2026-03-10T05:46:49.138+0000 7f9338678000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T05:46:49.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:49 vm02 bash[17731]: debug 2026-03-10T05:46:49.299+0000 7f49834b8000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T05:46:49.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:49 vm02 bash[17731]: debug 2026-03-10T05:46:49.363+0000 7f49834b8000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T05:46:49.758 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:49 vm05 bash[18520]: debug 2026-03-10T05:46:49.266+0000 7f9338678000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T05:46:49.758 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:49 vm05 bash[18520]: debug 2026-03-10T05:46:49.318+0000 7f9338678000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T05:46:49.758 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:49 vm05 bash[18520]: debug 2026-03-10T05:46:49.382+0000 7f9338678000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T05:46:50.258 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:49 vm05 bash[18520]: debug 2026-03-10T05:46:49.862+0000 7f9338678000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T05:46:50.258 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:49 vm05 bash[18520]: debug 2026-03-10T05:46:49.914+0000 7f9338678000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T05:46:50.258 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:49 vm05 bash[18520]: debug 2026-03-10T05:46:49.966+0000 7f9338678000 -1 
mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T05:46:50.269 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:49 vm02 bash[17731]: debug 2026-03-10T05:46:49.851+0000 7f49834b8000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T05:46:50.269 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:49 vm02 bash[17731]: debug 2026-03-10T05:46:49.907+0000 7f49834b8000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T05:46:50.269 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:49 vm02 bash[17731]: debug 2026-03-10T05:46:49.959+0000 7f49834b8000 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T05:46:50.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:50 vm02 bash[17731]: debug 2026-03-10T05:46:50.263+0000 7f49834b8000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T05:46:50.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:50 vm02 bash[17731]: debug 2026-03-10T05:46:50.323+0000 7f49834b8000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T05:46:50.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:50 vm02 bash[17731]: debug 2026-03-10T05:46:50.379+0000 7f49834b8000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T05:46:50.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:50 vm02 bash[17731]: debug 2026-03-10T05:46:50.459+0000 7f49834b8000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T05:46:50.758 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:50 vm05 bash[18520]: debug 2026-03-10T05:46:50.278+0000 7f9338678000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T05:46:50.758 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:50 vm05 bash[18520]: debug 2026-03-10T05:46:50.338+0000 7f9338678000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T05:46:50.758 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:50 vm05 bash[18520]: debug 2026-03-10T05:46:50.394+0000 7f9338678000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T05:46:50.758 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:50 vm05 bash[18520]: debug 2026-03-10T05:46:50.474+0000 7f9338678000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T05:46:51.035 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:50 vm02 bash[17731]: debug 2026-03-10T05:46:50.755+0000 7f49834b8000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T05:46:51.035 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:50 vm02 bash[17731]: debug 2026-03-10T05:46:50.923+0000 7f49834b8000 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T05:46:51.035 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:50 vm02 bash[17731]: debug 2026-03-10T05:46:50.971+0000 7f49834b8000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T05:46:51.052 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:50 vm05 bash[18520]: debug 2026-03-10T05:46:50.766+0000 7f9338678000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T05:46:51.052 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:50 vm05 bash[18520]: debug 2026-03-10T05:46:50.934+0000 7f9338678000 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T05:46:51.052 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:50 vm05 bash[18520]: debug 2026-03-10T05:46:50.986+0000 7f9338678000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T05:46:51.334 
INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:51 vm02 bash[17731]: debug 2026-03-10T05:46:51.031+0000 7f49834b8000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T05:46:51.334 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:51 vm02 bash[17731]: debug 2026-03-10T05:46:51.163+0000 7f49834b8000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T05:46:51.508 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:51 vm05 bash[18520]: debug 2026-03-10T05:46:51.042+0000 7f9338678000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T05:46:51.508 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:51 vm05 bash[18520]: debug 2026-03-10T05:46:51.174+0000 7f9338678000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T05:46:51.962 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:51 vm02 bash[17462]: cluster 2026-03-10T05:46:51.606057+0000 mon.a (mon.0) 565 : cluster [INF] Active manager daemon y restarted 2026-03-10T05:46:51.962 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:51 vm02 bash[17462]: cluster 2026-03-10T05:46:51.607426+0000 mon.a (mon.0) 566 : cluster [INF] Activating manager daemon y 2026-03-10T05:46:51.962 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:51 vm02 bash[17462]: cluster 2026-03-10T05:46:51.612593+0000 mon.a (mon.0) 567 : cluster [DBG] osdmap e49: 8 total, 8 up, 8 in 2026-03-10T05:46:51.962 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:51 vm02 bash[17462]: audit 2026-03-10T05:46:51.634103+0000 mon.b (mon.2) 18 : audit [DBG] from='mgr.? 192.168.123.105:0/2967788651' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T05:46:51.962 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:51 vm02 bash[17462]: audit 2026-03-10T05:46:51.634653+0000 mon.b (mon.2) 19 : audit [DBG] from='mgr.? 192.168.123.105:0/2967788651' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T05:46:51.962 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:51 vm02 bash[17462]: audit 2026-03-10T05:46:51.636168+0000 mon.b (mon.2) 20 : audit [DBG] from='mgr.? 192.168.123.105:0/2967788651' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T05:46:51.962 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:51 vm02 bash[17462]: audit 2026-03-10T05:46:51.637101+0000 mon.b (mon.2) 21 : audit [DBG] from='mgr.? 
192.168.123.105:0/2967788651' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T05:46:51.962 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:51 vm02 bash[17462]: cluster 2026-03-10T05:46:51.637607+0000 mon.a (mon.0) 568 : cluster [DBG] Standby manager daemon x restarted 2026-03-10T05:46:51.962 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:51 vm02 bash[17462]: cluster 2026-03-10T05:46:51.637732+0000 mon.a (mon.0) 569 : cluster [DBG] Standby manager daemon x started 2026-03-10T05:46:51.962 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:51 vm02 bash[17731]: debug 2026-03-10T05:46:51.599+0000 7f49834b8000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T05:46:51.962 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:51 vm02 bash[17731]: [10/Mar/2026:05:46:51] ENGINE Bus STARTING 2026-03-10T05:46:51.962 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:51 vm02 bash[17731]: [10/Mar/2026:05:46:51] ENGINE Bus STARTING 2026-03-10T05:46:51.962 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:51 vm02 bash[22526]: cluster 2026-03-10T05:46:51.606057+0000 mon.a (mon.0) 565 : cluster [INF] Active manager daemon y restarted 2026-03-10T05:46:51.962 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:51 vm02 bash[22526]: cluster 2026-03-10T05:46:51.607426+0000 mon.a (mon.0) 566 : cluster [INF] Activating manager daemon y 2026-03-10T05:46:51.962 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:51 vm02 bash[22526]: cluster 2026-03-10T05:46:51.612593+0000 mon.a (mon.0) 567 : cluster [DBG] osdmap e49: 8 total, 8 up, 8 in 2026-03-10T05:46:51.962 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:51 vm02 bash[22526]: audit 2026-03-10T05:46:51.634103+0000 mon.b (mon.2) 18 : audit [DBG] from='mgr.? 192.168.123.105:0/2967788651' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T05:46:51.962 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:51 vm02 bash[22526]: audit 2026-03-10T05:46:51.634653+0000 mon.b (mon.2) 19 : audit [DBG] from='mgr.? 192.168.123.105:0/2967788651' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T05:46:51.963 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:51 vm02 bash[22526]: audit 2026-03-10T05:46:51.636168+0000 mon.b (mon.2) 20 : audit [DBG] from='mgr.? 192.168.123.105:0/2967788651' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T05:46:51.963 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:51 vm02 bash[22526]: audit 2026-03-10T05:46:51.637101+0000 mon.b (mon.2) 21 : audit [DBG] from='mgr.? 
192.168.123.105:0/2967788651' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T05:46:51.963 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:51 vm02 bash[22526]: cluster 2026-03-10T05:46:51.637607+0000 mon.a (mon.0) 568 : cluster [DBG] Standby manager daemon x restarted 2026-03-10T05:46:51.963 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:51 vm02 bash[22526]: cluster 2026-03-10T05:46:51.637732+0000 mon.a (mon.0) 569 : cluster [DBG] Standby manager daemon x started 2026-03-10T05:46:52.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:51 vm05 bash[17864]: cluster 2026-03-10T05:46:51.606057+0000 mon.a (mon.0) 565 : cluster [INF] Active manager daemon y restarted 2026-03-10T05:46:52.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:51 vm05 bash[17864]: cluster 2026-03-10T05:46:51.607426+0000 mon.a (mon.0) 566 : cluster [INF] Activating manager daemon y 2026-03-10T05:46:52.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:51 vm05 bash[17864]: cluster 2026-03-10T05:46:51.612593+0000 mon.a (mon.0) 567 : cluster [DBG] osdmap e49: 8 total, 8 up, 8 in 2026-03-10T05:46:52.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:51 vm05 bash[17864]: audit 2026-03-10T05:46:51.634103+0000 mon.b (mon.2) 18 : audit [DBG] from='mgr.? 192.168.123.105:0/2967788651' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T05:46:52.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:51 vm05 bash[17864]: audit 2026-03-10T05:46:51.634653+0000 mon.b (mon.2) 19 : audit [DBG] from='mgr.? 192.168.123.105:0/2967788651' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T05:46:52.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:51 vm05 bash[17864]: audit 2026-03-10T05:46:51.636168+0000 mon.b (mon.2) 20 : audit [DBG] from='mgr.? 192.168.123.105:0/2967788651' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T05:46:52.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:51 vm05 bash[17864]: audit 2026-03-10T05:46:51.637101+0000 mon.b (mon.2) 21 : audit [DBG] from='mgr.? 192.168.123.105:0/2967788651' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T05:46:52.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:51 vm05 bash[17864]: cluster 2026-03-10T05:46:51.637607+0000 mon.a (mon.0) 568 : cluster [DBG] Standby manager daemon x restarted 2026-03-10T05:46:52.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:51 vm05 bash[17864]: cluster 2026-03-10T05:46:51.637732+0000 mon.a (mon.0) 569 : cluster [DBG] Standby manager daemon x started 2026-03-10T05:46:52.008 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:51 vm05 bash[18520]: debug 2026-03-10T05:46:51.626+0000 7f9338678000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T05:46:52.008 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:51 vm05 bash[18520]: [10/Mar/2026:05:46:51] ENGINE Bus STARTING 2026-03-10T05:46:52.008 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:51 vm05 bash[18520]: CherryPy Checker: 2026-03-10T05:46:52.008 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:51 vm05 bash[18520]: The Application mounted at '' has an empty config. 
2026-03-10T05:46:52.008 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:51 vm05 bash[18520]: [10/Mar/2026:05:46:51] ENGINE Serving on http://:::9283 2026-03-10T05:46:52.008 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:51 vm05 bash[18520]: [10/Mar/2026:05:46:51] ENGINE Bus STARTED 2026-03-10T05:46:52.334 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:51 vm02 bash[17731]: CherryPy Checker: 2026-03-10T05:46:52.334 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:51 vm02 bash[17731]: The Application mounted at '' has an empty config. 2026-03-10T05:46:52.334 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:51 vm02 bash[17731]: [10/Mar/2026:05:46:51] ENGINE Serving on http://:::9283 2026-03-10T05:46:52.334 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:51 vm02 bash[17731]: [10/Mar/2026:05:46:51] ENGINE Bus STARTED 2026-03-10T05:46:52.334 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:52 vm02 bash[17731]: [10/Mar/2026:05:46:52] ENGINE Serving on https://192.168.123.102:7150 2026-03-10T05:46:52.334 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:52 vm02 bash[17731]: [10/Mar/2026:05:46:52] ENGINE Bus STARTED 2026-03-10T05:46:52.718 INFO:teuthology.orchestra.run.vm05.stdout:Scheduled alertmanager update... 2026-03-10T05:46:52.776 DEBUG:teuthology.orchestra.run.vm02:alertmanager.a> sudo journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@alertmanager.a.service 2026-03-10T05:46:52.777 INFO:tasks.cephadm:Adding grafana.a on vm05 2026-03-10T05:46:52.777 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph orch apply grafana '1;vm05=a' 2026-03-10T05:46:52.929 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: cluster 2026-03-10T05:46:51.667697+0000 mon.a (mon.0) 570 : cluster [DBG] mgrmap e17: y(active, starting, since 0.0603879s), standbys: x 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: audit 2026-03-10T05:46:51.671450+0000 mon.c (mon.1) 23 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: audit 2026-03-10T05:46:51.671818+0000 mon.c (mon.1) 24 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: audit 2026-03-10T05:46:51.672026+0000 mon.c (mon.1) 25 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: audit 2026-03-10T05:46:51.672712+0000 mon.c (mon.1) 26 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: audit 2026-03-10T05:46:51.695692+0000 mon.c (mon.1) 27 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: audit 
2026-03-10T05:46:51.696088+0000 mon.c (mon.1) 28 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: audit 2026-03-10T05:46:51.696505+0000 mon.c (mon.1) 29 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: audit 2026-03-10T05:46:51.696908+0000 mon.c (mon.1) 30 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: audit 2026-03-10T05:46:51.697316+0000 mon.c (mon.1) 31 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: audit 2026-03-10T05:46:51.697722+0000 mon.c (mon.1) 32 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: audit 2026-03-10T05:46:51.698160+0000 mon.c (mon.1) 33 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: audit 2026-03-10T05:46:51.698570+0000 mon.c (mon.1) 34 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: audit 2026-03-10T05:46:51.699002+0000 mon.c (mon.1) 35 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: audit 2026-03-10T05:46:51.699498+0000 mon.c (mon.1) 36 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: audit 2026-03-10T05:46:51.699904+0000 mon.c (mon.1) 37 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: audit 2026-03-10T05:46:51.700422+0000 mon.c (mon.1) 38 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: cluster 2026-03-10T05:46:51.706260+0000 mon.a (mon.0) 571 : cluster [INF] Manager daemon y is now available 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: audit 2026-03-10T05:46:51.729623+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: audit 2026-03-10T05:46:51.735263+0000 mon.c (mon.1) 39 : audit [DBG] from='mgr.14409 
192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: audit 2026-03-10T05:46:51.736785+0000 mon.c (mon.1) 40 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: audit 2026-03-10T05:46:51.740756+0000 mon.c (mon.1) 41 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: audit 2026-03-10T05:46:51.741479+0000 mon.c (mon.1) 42 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: audit 2026-03-10T05:46:51.750717+0000 mon.c (mon.1) 43 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: audit 2026-03-10T05:46:51.751091+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: audit 2026-03-10T05:46:51.762649+0000 mon.c (mon.1) 44 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: audit 2026-03-10T05:46:51.763038+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:46:52.930 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:52 vm05 bash[17864]: audit 2026-03-10T05:46:52.082355+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:46:53.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: cluster 2026-03-10T05:46:51.667697+0000 mon.a (mon.0) 570 : cluster [DBG] mgrmap e17: y(active, starting, since 0.0603879s), standbys: x 2026-03-10T05:46:53.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: audit 2026-03-10T05:46:51.671450+0000 mon.c (mon.1) 23 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:46:53.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: audit 2026-03-10T05:46:51.671818+0000 mon.c (mon.1) 24 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:46:53.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: audit 2026-03-10T05:46:51.672026+0000 mon.c (mon.1) 25 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:46:53.084 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: audit 2026-03-10T05:46:51.672712+0000 mon.c (mon.1) 26 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T05:46:53.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: audit 2026-03-10T05:46:51.695692+0000 mon.c (mon.1) 27 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T05:46:53.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: audit 2026-03-10T05:46:51.696088+0000 mon.c (mon.1) 28 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:46:53.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: audit 2026-03-10T05:46:51.696505+0000 mon.c (mon.1) 29 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:46:53.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: audit 2026-03-10T05:46:51.696908+0000 mon.c (mon.1) 30 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: audit 2026-03-10T05:46:51.697316+0000 mon.c (mon.1) 31 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: audit 2026-03-10T05:46:51.697722+0000 mon.c (mon.1) 32 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: audit 2026-03-10T05:46:51.698160+0000 mon.c (mon.1) 33 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: audit 2026-03-10T05:46:51.698570+0000 mon.c (mon.1) 34 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: audit 2026-03-10T05:46:51.699002+0000 mon.c (mon.1) 35 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: audit 2026-03-10T05:46:51.699498+0000 mon.c (mon.1) 36 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: audit 2026-03-10T05:46:51.699904+0000 mon.c (mon.1) 37 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: audit 2026-03-10T05:46:51.700422+0000 mon.c (mon.1) 38 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' 
entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: cluster 2026-03-10T05:46:51.706260+0000 mon.a (mon.0) 571 : cluster [INF] Manager daemon y is now available 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: audit 2026-03-10T05:46:51.729623+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: audit 2026-03-10T05:46:51.735263+0000 mon.c (mon.1) 39 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: audit 2026-03-10T05:46:51.736785+0000 mon.c (mon.1) 40 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: audit 2026-03-10T05:46:51.740756+0000 mon.c (mon.1) 41 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: audit 2026-03-10T05:46:51.741479+0000 mon.c (mon.1) 42 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: audit 2026-03-10T05:46:51.750717+0000 mon.c (mon.1) 43 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: audit 2026-03-10T05:46:51.751091+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: audit 2026-03-10T05:46:51.762649+0000 mon.c (mon.1) 44 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: audit 2026-03-10T05:46:51.763038+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:52 vm02 bash[17462]: audit 2026-03-10T05:46:52.082355+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: cluster 2026-03-10T05:46:51.667697+0000 mon.a (mon.0) 570 : cluster [DBG] mgrmap e17: y(active, starting, since 0.0603879s), standbys: x 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: audit 2026-03-10T05:46:51.671450+0000 mon.c (mon.1) 23 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "mon 
metadata", "id": "a"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: audit 2026-03-10T05:46:51.671818+0000 mon.c (mon.1) 24 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: audit 2026-03-10T05:46:51.672026+0000 mon.c (mon.1) 25 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: audit 2026-03-10T05:46:51.672712+0000 mon.c (mon.1) 26 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: audit 2026-03-10T05:46:51.695692+0000 mon.c (mon.1) 27 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: audit 2026-03-10T05:46:51.696088+0000 mon.c (mon.1) 28 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: audit 2026-03-10T05:46:51.696505+0000 mon.c (mon.1) 29 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: audit 2026-03-10T05:46:51.696908+0000 mon.c (mon.1) 30 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: audit 2026-03-10T05:46:51.697316+0000 mon.c (mon.1) 31 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: audit 2026-03-10T05:46:51.697722+0000 mon.c (mon.1) 32 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: audit 2026-03-10T05:46:51.698160+0000 mon.c (mon.1) 33 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: audit 2026-03-10T05:46:51.698570+0000 mon.c (mon.1) 34 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: audit 2026-03-10T05:46:51.699002+0000 mon.c (mon.1) 35 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: audit 2026-03-10T05:46:51.699498+0000 mon.c 
(mon.1) 36 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: audit 2026-03-10T05:46:51.699904+0000 mon.c (mon.1) 37 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: audit 2026-03-10T05:46:51.700422+0000 mon.c (mon.1) 38 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: cluster 2026-03-10T05:46:51.706260+0000 mon.a (mon.0) 571 : cluster [INF] Manager daemon y is now available 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: audit 2026-03-10T05:46:51.729623+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: audit 2026-03-10T05:46:51.735263+0000 mon.c (mon.1) 39 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: audit 2026-03-10T05:46:51.736785+0000 mon.c (mon.1) 40 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: audit 2026-03-10T05:46:51.740756+0000 mon.c (mon.1) 41 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: audit 2026-03-10T05:46:51.741479+0000 mon.c (mon.1) 42 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: audit 2026-03-10T05:46:51.750717+0000 mon.c (mon.1) 43 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: audit 2026-03-10T05:46:51.751091+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: audit 2026-03-10T05:46:51.762649+0000 mon.c (mon.1) 44 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:52 vm02 bash[22526]: audit 2026-03-10T05:46:51.763038+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:46:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 
05:46:52 vm02 bash[22526]: audit 2026-03-10T05:46:52.082355+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:46:53.171 INFO:teuthology.orchestra.run.vm05.stdout:Scheduled grafana update... 2026-03-10T05:46:53.224 DEBUG:teuthology.orchestra.run.vm05:grafana.a> sudo journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@grafana.a.service 2026-03-10T05:46:53.225 INFO:tasks.cephadm:Setting up client nodes... 2026-03-10T05:46:53.225 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-10T05:46:53.641 INFO:teuthology.orchestra.run.vm02.stdout:[client.0] 2026-03-10T05:46:53.641 INFO:teuthology.orchestra.run.vm02.stdout: key = AQBNsK9pTw/zJRAAdmYGNhEWWADgYY22oaLhjQ== 2026-03-10T05:46:53.688 DEBUG:teuthology.orchestra.run.vm02:> set -ex 2026-03-10T05:46:53.688 DEBUG:teuthology.orchestra.run.vm02:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-10T05:46:53.688 DEBUG:teuthology.orchestra.run.vm02:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-10T05:46:53.703 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-10T05:46:54.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:53 vm05 bash[17864]: cephadm 2026-03-10T05:46:51.962048+0000 mgr.y (mgr.14409) 1 : cephadm [INF] [10/Mar/2026:05:46:51] ENGINE Bus STARTING 2026-03-10T05:46:54.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:53 vm05 bash[17864]: cephadm 2026-03-10T05:46:52.076710+0000 mgr.y (mgr.14409) 2 : cephadm [INF] [10/Mar/2026:05:46:52] ENGINE Serving on https://192.168.123.102:7150 2026-03-10T05:46:54.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:53 vm05 bash[17864]: cephadm 2026-03-10T05:46:52.076843+0000 mgr.y (mgr.14409) 3 : cephadm [INF] [10/Mar/2026:05:46:52] ENGINE Bus STARTED 2026-03-10T05:46:54.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:53 vm05 bash[17864]: cluster 2026-03-10T05:46:52.694368+0000 mon.a (mon.0) 576 : cluster [DBG] mgrmap e18: y(active, since 1.08706s), standbys: x 2026-03-10T05:46:54.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:53 vm05 bash[17864]: audit 2026-03-10T05:46:52.704392+0000 mgr.y (mgr.14409) 4 : audit [DBG] from='client.24305 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm02=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:46:54.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:53 vm05 bash[17864]: cephadm 2026-03-10T05:46:52.707795+0000 mgr.y (mgr.14409) 5 : cephadm [INF] Saving service alertmanager spec with placement vm02=a;count:1 2026-03-10T05:46:54.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:53 vm05 bash[17864]: cluster 2026-03-10T05:46:52.709606+0000 mgr.y (mgr.14409) 6 : cluster [DBG] pgmap v3: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:46:54.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:53 vm05 bash[17864]: audit 2026-03-10T05:46:52.715690+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:46:54.008 
INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:53 vm05 bash[17864]: audit 2026-03-10T05:46:53.168586+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:46:54.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:53 vm05 bash[17864]: audit 2026-03-10T05:46:53.636555+0000 mon.a (mon.0) 579 : audit [INF] from='client.? 192.168.123.102:0/333607826' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T05:46:54.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:53 vm05 bash[17864]: audit 2026-03-10T05:46:53.641354+0000 mon.a (mon.0) 580 : audit [INF] from='client.? 192.168.123.102:0/333607826' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T05:46:54.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:53 vm02 bash[17462]: cephadm 2026-03-10T05:46:51.962048+0000 mgr.y (mgr.14409) 1 : cephadm [INF] [10/Mar/2026:05:46:51] ENGINE Bus STARTING 2026-03-10T05:46:54.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:53 vm02 bash[17462]: cephadm 2026-03-10T05:46:52.076710+0000 mgr.y (mgr.14409) 2 : cephadm [INF] [10/Mar/2026:05:46:52] ENGINE Serving on https://192.168.123.102:7150 2026-03-10T05:46:54.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:53 vm02 bash[17462]: cephadm 2026-03-10T05:46:52.076843+0000 mgr.y (mgr.14409) 3 : cephadm [INF] [10/Mar/2026:05:46:52] ENGINE Bus STARTED 2026-03-10T05:46:54.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:53 vm02 bash[17462]: cluster 2026-03-10T05:46:52.694368+0000 mon.a (mon.0) 576 : cluster [DBG] mgrmap e18: y(active, since 1.08706s), standbys: x 2026-03-10T05:46:54.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:53 vm02 bash[17462]: audit 2026-03-10T05:46:52.704392+0000 mgr.y (mgr.14409) 4 : audit [DBG] from='client.24305 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm02=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:46:54.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:53 vm02 bash[17462]: cephadm 2026-03-10T05:46:52.707795+0000 mgr.y (mgr.14409) 5 : cephadm [INF] Saving service alertmanager spec with placement vm02=a;count:1 2026-03-10T05:46:54.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:53 vm02 bash[17462]: cluster 2026-03-10T05:46:52.709606+0000 mgr.y (mgr.14409) 6 : cluster [DBG] pgmap v3: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:46:54.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:53 vm02 bash[17462]: audit 2026-03-10T05:46:52.715690+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:46:54.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:53 vm02 bash[17462]: audit 2026-03-10T05:46:53.168586+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:46:54.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:53 vm02 bash[17462]: audit 2026-03-10T05:46:53.636555+0000 mon.a (mon.0) 579 : audit [INF] from='client.? 
192.168.123.102:0/333607826' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T05:46:54.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:53 vm02 bash[17462]: audit 2026-03-10T05:46:53.641354+0000 mon.a (mon.0) 580 : audit [INF] from='client.? 192.168.123.102:0/333607826' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T05:46:54.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:53 vm02 bash[22526]: cephadm 2026-03-10T05:46:51.962048+0000 mgr.y (mgr.14409) 1 : cephadm [INF] [10/Mar/2026:05:46:51] ENGINE Bus STARTING 2026-03-10T05:46:54.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:53 vm02 bash[22526]: cephadm 2026-03-10T05:46:52.076710+0000 mgr.y (mgr.14409) 2 : cephadm [INF] [10/Mar/2026:05:46:52] ENGINE Serving on https://192.168.123.102:7150 2026-03-10T05:46:54.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:53 vm02 bash[22526]: cephadm 2026-03-10T05:46:52.076843+0000 mgr.y (mgr.14409) 3 : cephadm [INF] [10/Mar/2026:05:46:52] ENGINE Bus STARTED 2026-03-10T05:46:54.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:53 vm02 bash[22526]: cluster 2026-03-10T05:46:52.694368+0000 mon.a (mon.0) 576 : cluster [DBG] mgrmap e18: y(active, since 1.08706s), standbys: x 2026-03-10T05:46:54.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:53 vm02 bash[22526]: audit 2026-03-10T05:46:52.704392+0000 mgr.y (mgr.14409) 4 : audit [DBG] from='client.24305 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm02=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:46:54.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:53 vm02 bash[22526]: cephadm 2026-03-10T05:46:52.707795+0000 mgr.y (mgr.14409) 5 : cephadm [INF] Saving service alertmanager spec with placement vm02=a;count:1 2026-03-10T05:46:54.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:53 vm02 bash[22526]: cluster 2026-03-10T05:46:52.709606+0000 mgr.y (mgr.14409) 6 : cluster [DBG] pgmap v3: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:46:54.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:53 vm02 bash[22526]: audit 2026-03-10T05:46:52.715690+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:46:54.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:53 vm02 bash[22526]: audit 2026-03-10T05:46:53.168586+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:46:54.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:53 vm02 bash[22526]: audit 2026-03-10T05:46:53.636555+0000 mon.a (mon.0) 579 : audit [INF] from='client.? 192.168.123.102:0/333607826' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T05:46:54.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:53 vm02 bash[22526]: audit 2026-03-10T05:46:53.641354+0000 mon.a (mon.0) 580 : audit [INF] from='client.? 
192.168.123.102:0/333607826' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T05:46:54.113 INFO:teuthology.orchestra.run.vm05.stdout:[client.1]
2026-03-10T05:46:54.113 INFO:teuthology.orchestra.run.vm05.stdout: key = AQBOsK9pszJgBhAAjqlrJrtRsJUabIUSYsoUGw==
2026-03-10T05:46:54.180 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-10T05:46:54.180 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/ceph.client.1.keyring
2026-03-10T05:46:54.180 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring
2026-03-10T05:46:54.191 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean...
2026-03-10T05:46:54.191 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available
2026-03-10T05:46:54.191 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph mgr dump --format=json
2026-03-10T05:46:55.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:54 vm05 bash[17864]: audit 2026-03-10T05:46:53.163787+0000 mgr.y (mgr.14409) 7 : audit [DBG] from='client.24338 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm05=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:46:55.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:54 vm05 bash[17864]: cephadm 2026-03-10T05:46:53.164587+0000 mgr.y (mgr.14409) 8 : cephadm [INF] Saving service grafana spec with placement vm05=a;count:1
2026-03-10T05:46:55.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:54 vm05 bash[17864]: cluster 2026-03-10T05:46:53.680020+0000 mgr.y (mgr.14409) 9 : cluster [DBG] pgmap v4: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:46:55.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:54 vm05 bash[17864]: audit 2026-03-10T05:46:54.101177+0000 mon.b (mon.2) 22 : audit [INF] from='client.? 192.168.123.105:0/469727025' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T05:46:55.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:54 vm05 bash[17864]: audit 2026-03-10T05:46:54.106756+0000 mon.a (mon.0) 581 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T05:46:55.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:54 vm05 bash[17864]: audit 2026-03-10T05:46:54.112954+0000 mon.a (mon.0) 582 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T05:46:55.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:54 vm05 bash[17864]: cluster 2026-03-10T05:46:54.181914+0000 mon.a (mon.0) 583 : cluster [DBG] mgrmap e19: y(active, since 2s), standbys: x
2026-03-10T05:46:55.583 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:46:55 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:55.584 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:46:55 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:55.584 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:46:55 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:55.584 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:46:55 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:55.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:55 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
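The "waiting for mgr available" step above repeatedly shells into the bootstrap container and parses the ceph mgr dump JSON until the active mgr reports itself available; the mgrmap e19 relay shows mgr.y active with mgr.x on standby. A minimal sketch of an equivalent manual poll, reusing the image and fsid from the command above (an illustration only, not teuthology's actual loop):

# Sketch: poll mgr availability the way the harness does here, reusing the
# bootstrap image and fsid from the cephadm shell invocation above.
# Illustration only; teuthology's real loop lives in its cephadm task.
until sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 \
      shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- \
      ceph mgr dump --format=json | jq -e '.available == true' >/dev/null; do
  sleep 5   # retry until the active mgr reports available
done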
2026-03-10T05:46:55.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:55 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:55.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:55 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:55.900 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:46:55 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:55.900 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:46:55 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:55.900 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:46:55 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:55.900 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:46:55 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:55.900 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:55 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:55.900 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:46:55 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:55.901 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:55 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:55.901 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:55 vm02 systemd[1]: Started Ceph node-exporter.a for 107483ae-1c44-11f1-b530-c1172cd6122a.
2026-03-10T05:46:55.901 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:55 vm02 bash[37510]: Unable to find image 'quay.io/prometheus/node-exporter:v1.3.1' locally
2026-03-10T05:46:56.118 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:56 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:56.118 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:46:56 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:56.118 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:46:56 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:56.118 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:46:56 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:56.118 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:46:56 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:56.118 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:56 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: audit 2026-03-10T05:46:54.898979+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: audit 2026-03-10T05:46:54.904772+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: audit 2026-03-10T05:46:55.141426+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: audit 2026-03-10T05:46:55.144339+0000 mon.c (mon.1) 45 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: audit 2026-03-10T05:46:55.144552+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: cephadm 2026-03-10T05:46:55.145287+0000 mgr.y (mgr.14409) 10 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: audit 2026-03-10T05:46:55.152540+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: audit 2026-03-10T05:46:55.157551+0000 mon.c (mon.1) 46 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: audit 2026-03-10T05:46:55.157789+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: audit 2026-03-10T05:46:55.158814+0000 mon.c (mon.1) 47 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: audit 2026-03-10T05:46:55.159000+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: audit 2026-03-10T05:46:55.160094+0000 mon.c (mon.1) 48 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: audit 2026-03-10T05:46:55.160899+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: audit 2026-03-10T05:46:55.165446+0000 mon.c (mon.1) 49 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: audit 2026-03-10T05:46:55.166077+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: cephadm 2026-03-10T05:46:55.168208+0000 mgr.y (mgr.14409) 11 : cephadm [INF] Adjusting osd_memory_target on vm05 to 113.9M
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: cephadm 2026-03-10T05:46:55.172689+0000 mgr.y (mgr.14409) 12 : cephadm [WRN] Unable to set osd_memory_target on vm05 to 119478988: error parsing value: Value '119478988' is below minimum 939524096
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: cephadm 2026-03-10T05:46:55.172728+0000 mgr.y (mgr.14409) 13 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: cephadm 2026-03-10T05:46:55.207358+0000 mgr.y (mgr.14409) 14 : cephadm [INF] Updating vm02:/etc/ceph/ceph.client.admin.keyring
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: cephadm 2026-03-10T05:46:55.232542+0000 mgr.y (mgr.14409) 15 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: audit 2026-03-10T05:46:55.262946+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: audit 2026-03-10T05:46:55.290683+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: audit 2026-03-10T05:46:55.294963+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: cephadm 2026-03-10T05:46:55.297681+0000 mgr.y (mgr.14409) 16 : cephadm [INF] Deploying daemon node-exporter.a on vm02
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: cluster 2026-03-10T05:46:55.680262+0000 mgr.y (mgr.14409) 17 : cluster [DBG] pgmap v5: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: audit 2026-03-10T05:46:55.797803+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:55 vm05 bash[17864]: cephadm 2026-03-10T05:46:55.802489+0000 mgr.y (mgr.14409) 18 : cephadm [INF] Deploying daemon node-exporter.b on vm05
2026-03-10T05:46:56.119 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:56 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
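The osd_memory_target exchange above is worth decoding: cephadm's autotuner splits a share of host RAM (autotune_memory_target_ratio, default 0.7 per the mgr dump later in this log) across the daemons on vm05 and arrives at 119478988 bytes, the "113.9M" in the Adjusting line; the mon rejects it because osd_memory_target has a hard floor of 939524096 bytes, so the [WRN] is expected on VPS nodes this small and does not affect the upgrade under test. A quick check of the arithmetic, using only values taken from the log:

# Values from the cephadm [WRN] above: the autotuned target sits below the floor.
echo $(( 119478988 / 1048576 ))   # 113, i.e. the ~113.9 MiB autotuned target
echo $(( 939524096 / 1048576 ))   # 896, the minimum osd_memory_target in MiB
# On a live cluster one can inspect the effective setting and the ratio:
ceph config get osd osd_memory_target
ceph config get mgr mgr/cephadm/autotune_memory_target_ratio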
2026-03-10T05:46:56.421 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:56 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:56.421 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:46:56 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:56.421 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:46:56 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:56.421 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:46:56 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:56.421 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:46:56 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:56.421 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:46:56 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:56.421 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:46:56 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:56.422 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:56 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:46:56.422 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:56 vm05 systemd[1]: Started Ceph node-exporter.b for 107483ae-1c44-11f1-b530-c1172cd6122a.
2026-03-10T05:46:56.422 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:56 vm05 bash[32679]: Unable to find image 'quay.io/prometheus/node-exporter:v1.3.1' locally
2026-03-10T05:46:56.700 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/mon.c/config
2026-03-10T05:46:57.038 INFO:teuthology.orchestra.run.vm02.stdout:
2026-03-10T05:46:57.095 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":19,"active_gid":14409,"active_name":"y","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6800","nonce":3443698967},{"type":"v1","addr":"192.168.123.102:6801","nonce":3443698967}]},"active_addr":"192.168.123.102:6801/3443698967","active_change":"2026-03-10T05:46:51.607300+0000","active_mgr_features":4540138303579357183,"available":true,"standbys":[{"gid":24317,"name":"x","mgr_features":4540138303579357183,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts 
to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"7","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 or 7 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2400","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format 
HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"7","min":"0","max":"7","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 or 7 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","upmap"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt optimization","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to 
next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.23.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/ceph-grafana:8.3.5","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"docker.io/library/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"docker.io/arcts/keepalived","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.3.1","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.33.4","min":"","max":"","enum_allowed":[],"desc":"Prometheus container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"docker.io/maxwo/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"docker.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_val
ue":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_P
OLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local pool","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool 
for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"noautoscale":{"name":"noautoscale","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"global autoscale flag","long_desc":"Option to turn on/off the autoscaler for all 
pools","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"serve
r_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"drive_group_interval":{"name":"drive_group_interval","type":"float","level":"advanced","flags":0,"default_value":"300.0","min":"","max":"","enum_allowed":[],"desc":"interval in seconds between re-application of applied drive_groups","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"",
"enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False
","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name
":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default
_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","prometheus","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format 
HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"7","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 or 7 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2400","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"7","min":"0","max":"7","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 or 7 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","upmap"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. 
Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.23.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/ceph-grafana:8.3.5","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"docker.io/library/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"docker.io/arcts/keepalived","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.3.1","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.33.4","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"docker.io/maxwo/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with 
`--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"docker.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are 
removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_val
ue":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_P
OLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local pool","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool 
for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"noautoscale":{"name":"noautoscale","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"global autoscale flag","long_desc":"Option to turn on/off the autoscaler for all 
pools","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"serve
r_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"drive_group_interval":{"name":"drive_group_interval","type":"float","level":"advanced","flags":0,"default_value":"300.0","min":"","max":"","enum_allowed":[],"desc":"interval in seconds between re-application of applied drive_groups","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"",
"enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False
","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name
":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default
_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.102:8443/","prometheus":"http://192.168.123.102:9283/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"last_failure_osd_epoch":49,"active_clients":[{"addrvec":[{"type":"v2","addr":"192.168.123.102:0","nonce":350459207}]},{"addrvec":[{"type":"v2","addr":"192.168.123.102:0","nonce":3027066951}]},{"addrvec":[{"type":"v2","addr":"192.168.123.102:0","nonce":2188069013}]},{"addrvec":[{"type":"v2","addr":"192.168.123.102:0","nonce":2274851303}]}]}} 2026-03-10T05:46:57.096 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-10T05:46:57.096 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-10T05:46:57.096 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph osd dump --format=json 2026-03-10T05:46:57.320 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:57 vm02 bash[17462]: audit 2026-03-10T05:46:56.319432+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:46:57.320 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:57 vm02 bash[17462]: cephadm 2026-03-10T05:46:56.328077+0000 mgr.y (mgr.14409) 19 : cephadm [INF] Deploying daemon prometheus.a on vm05 2026-03-10T05:46:57.320 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:57 vm02 bash[17462]: audit 2026-03-10T05:46:56.735614+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:46:57.320 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:57 vm02 bash[17462]: audit 2026-03-10T05:46:57.036319+0000 mon.c (mon.1) 50 : audit [DBG] from='client.? 192.168.123.102:0/3217330465' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T05:46:57.320 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:57 vm02 bash[37510]: v1.3.1: Pulling from prometheus/node-exporter 2026-03-10T05:46:57.557 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:57 vm02 bash[22526]: audit 2026-03-10T05:46:56.319432+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:46:57.557 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:57 vm02 bash[22526]: cephadm 2026-03-10T05:46:56.328077+0000 mgr.y (mgr.14409) 19 : cephadm [INF] Deploying daemon prometheus.a on vm05 2026-03-10T05:46:57.557 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:57 vm02 bash[22526]: audit 2026-03-10T05:46:56.735614+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:46:57.557 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:57 vm02 bash[22526]: audit 2026-03-10T05:46:57.036319+0000 mon.c (mon.1) 50 : audit [DBG] from='client.? 
192.168.123.102:0/3217330465' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T05:46:57.584 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:57 vm02 bash[37510]: aa2a8d90b84c: Pulling fs layer 2026-03-10T05:46:57.584 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:57 vm02 bash[37510]: b45d31ee2d7f: Pulling fs layer 2026-03-10T05:46:57.584 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:57 vm02 bash[37510]: b5db1e299295: Pulling fs layer 2026-03-10T05:46:57.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:57 vm05 bash[17864]: audit 2026-03-10T05:46:56.319432+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:46:57.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:57 vm05 bash[17864]: cephadm 2026-03-10T05:46:56.328077+0000 mgr.y (mgr.14409) 19 : cephadm [INF] Deploying daemon prometheus.a on vm05 2026-03-10T05:46:57.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:57 vm05 bash[17864]: audit 2026-03-10T05:46:56.735614+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:46:57.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:57 vm05 bash[17864]: audit 2026-03-10T05:46:57.036319+0000 mon.c (mon.1) 50 : audit [DBG] from='client.? 192.168.123.102:0/3217330465' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T05:46:58.197 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:57 vm05 bash[32679]: v1.3.1: Pulling from prometheus/node-exporter 2026-03-10T05:46:58.384 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: b45d31ee2d7f: Verifying Checksum 2026-03-10T05:46:58.384 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: b45d31ee2d7f: Download complete 2026-03-10T05:46:58.384 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: aa2a8d90b84c: Verifying Checksum 2026-03-10T05:46:58.384 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: aa2a8d90b84c: Download complete 2026-03-10T05:46:58.384 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: aa2a8d90b84c: Pull complete 2026-03-10T05:46:58.384 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: b5db1e299295: Verifying Checksum 2026-03-10T05:46:58.385 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: b5db1e299295: Download complete 2026-03-10T05:46:58.385 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: b45d31ee2d7f: Pull complete 2026-03-10T05:46:58.508 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:58 vm05 bash[32679]: aa2a8d90b84c: Pulling fs layer 2026-03-10T05:46:58.509 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:58 vm05 bash[32679]: b45d31ee2d7f: Pulling fs layer 2026-03-10T05:46:58.509 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:58 vm05 bash[32679]: b5db1e299295: Pulling fs layer 2026-03-10T05:46:58.709 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/mon.c/config 2026-03-10T05:46:58.718 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[17462]: cluster 2026-03-10T05:46:57.680557+0000 mgr.y (mgr.14409) 20 : cluster [DBG] pgmap v6: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:46:58.718 
INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: b5db1e299295: Pull complete 2026-03-10T05:46:58.718 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: Digest: sha256:f2269e73124dd0f60a7d19a2ce1264d33d08a985aed0ee6b0b89d0be470592cd 2026-03-10T05:46:58.718 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.3.1 2026-03-10T05:46:58.718 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.515Z caller=node_exporter.go:182 level=info msg="Starting node_exporter" version="(version=1.3.1, branch=HEAD, revision=a2321e7b940ddcff26873612bccdf7cd4c42b6b6)" 2026-03-10T05:46:58.718 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.515Z caller=node_exporter.go:183 level=info msg="Build context" build_context="(go=go1.17.3, user=root@243aafa5525c, date=20211205-11:09:49)" 2026-03-10T05:46:58.718 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+)($|/) 2026-03-10T05:46:58.718 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:108 level=info msg="Enabled collectors" 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=arp 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=bcache 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=bonding 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=btrfs 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=conntrack 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=cpu 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=cpufreq 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=diskstats 
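
The `ceph mgr dump` JSON above enumerates every manager module together with its option schema: each module_options entry carries a type, flags, a default_value, optional min/max bounds, and an enum_allowed list. A minimal sketch of consuming that schema offline, assuming the dump was saved to mgr_dump.json (hypothetical path) and the standard layout where these entries sit in a top-level available_modules array; the validate_option helper is illustrative only, not part of Ceph or teuthology:

    import json

    # Load a saved copy of the `ceph mgr dump` output shown above
    # (hypothetical capture; the real run pipes it through cephadm shell).
    with open("mgr_dump.json") as f:
        dump = json.load(f)

    def validate_option(opt, value):
        """Check a candidate value against one module_options schema entry."""
        if opt["enum_allowed"] and str(value) not in opt["enum_allowed"]:
            return False
        if opt["type"] in ("int", "uint", "float", "secs"):
            v = float(value)
            if opt["min"] != "" and v < float(opt["min"]):
                return False
            if opt["max"] != "" and v > float(opt["max"]):
                return False
        return True

    # pg_autoscaler's threshold declares min 1.0, matching its long_desc.
    pg = next(m for m in dump["available_modules"]
              if m["name"] == "pg_autoscaler")
    thr = pg["module_options"]["threshold"]
    print(validate_option(thr, 0.5), validate_option(thr, 3.0))  # False True

The same check applies to the other bounded options in the dump, e.g. telemetry's interval (min 8) or prometheus's standby_error_status_code (min 400, max 599).
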
2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=dmi 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=edac 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=entropy 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=fibrechannel 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=filefd 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=filesystem 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=hwmon 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=infiniband 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=ipvs 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=loadavg 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=mdadm 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=meminfo 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=netclass 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=netdev 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=netstat 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=nfs 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=nfsd 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=nvme 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: 
ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=os 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=powersupplyclass 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=pressure 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=rapl 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=schedstat 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=sockstat 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=softnet 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=stat 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=tapestats 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=textfile 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=thermal_zone 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=time 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=udp_queues 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=uname 2026-03-10T05:46:58.719 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=vmstat 2026-03-10T05:46:58.720 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=xfs 2026-03-10T05:46:58.720 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:115 level=info collector=zfs 2026-03-10T05:46:58.720 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=node_exporter.go:199 level=info msg="Listening on" address=:9100 2026-03-10T05:46:58.720 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:46:58 vm02 bash[37510]: ts=2026-03-10T05:46:58.516Z caller=tls_config.go:195 level=info msg="TLS is 
disabled." http2=false 2026-03-10T05:46:58.969 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:58 vm05 bash[17864]: cluster 2026-03-10T05:46:57.680557+0000 mgr.y (mgr.14409) 20 : cluster [DBG] pgmap v6: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:46:58.969 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:58 vm05 bash[32679]: b45d31ee2d7f: Verifying Checksum 2026-03-10T05:46:58.969 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:58 vm05 bash[32679]: b45d31ee2d7f: Download complete 2026-03-10T05:46:58.969 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:58 vm05 bash[32679]: aa2a8d90b84c: Verifying Checksum 2026-03-10T05:46:58.969 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:58 vm05 bash[32679]: aa2a8d90b84c: Download complete 2026-03-10T05:46:58.969 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:58 vm05 bash[32679]: aa2a8d90b84c: Pull complete 2026-03-10T05:46:58.969 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:58 vm05 bash[32679]: b5db1e299295: Verifying Checksum 2026-03-10T05:46:58.969 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:58 vm05 bash[32679]: b5db1e299295: Download complete 2026-03-10T05:46:58.969 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:58 vm05 bash[32679]: b45d31ee2d7f: Pull complete 2026-03-10T05:46:59.030 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T05:46:59.030 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":49,"fsid":"107483ae-1c44-11f1-b530-c1172cd6122a","created":"2026-03-10T05:43:51.949234+0000","modified":"2026-03-10T05:46:51.606195+0000","last_up_change":"2026-03-10T05:46:40.617204+0000","last_in_change":"2026-03-10T05:46:27.560077+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"quincy","pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T05:45:26.175212+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"20","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_p
romote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}}}],"osds":[{"osd":0,"uuid":"181bfe3a-c244-4b31-bf3a-c6074cc650d1","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":46,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6802","nonce":3358143121},{"type":"v1","addr":"192.168.123.102:6803","nonce":3358143121}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6804","nonce":3358143121},{"type":"v1","addr":"192.168.123.102:6805","nonce":3358143121}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6808","nonce":3358143121},{"type":"v1","addr":"192.168.123.102:6809","nonce":3358143121}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6806","nonce":3358143121},{"type":"v1","addr":"192.168.123.102:6807","nonce":3358143121}]},"public_addr":"192.168.123.102:6803/3358143121","cluster_addr":"192.168.123.102:6805/3358143121","heartbeat_back_addr":"192.168.123.102:6809/3358143121","heartbeat_front_addr":"192.168.123.102:6807/3358143121","state":["exists","up"]},{"osd":1,"uuid":"c0820da9-42eb-422f-88aa-598d51d4e5e7","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":29,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6810","nonce":3944310722},{"type":"v1","addr":"192.168.123.102:6811","nonce":3944310722}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6812","nonce":3944310722},{"type":"v1","addr":"192.168.123.102:6813","nonce":3944310722}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6816","nonce":3944310722},{"type":"v1","addr":"192.168.123.102:6817","nonce":3944310722}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6814","nonce":3944310722},{"type":"v1","addr":"192.168.123.102:6815","nonce":3944310722}]},"public_addr":"192.168.123.102:6811/3944310722","cluster_addr":"192.168.123.102:6813/3944310722","heartbeat_back_addr":"192.168.123.102:6817/3944310722","heartbeat_front_addr":"192.168.123.102:6815/3944310722","state":["exists","up"]},{"osd":2,"uuid":"2d5b11d8-3856-47e7-80bc-ba0d5e91fd6c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6818","nonce":1818843754},{"type":"v1","addr":"192.168.123.102:6819","nonce":1818843754}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6820","nonce":1818843754},{"type":"v1","addr":"192.168.123.102:6821","nonce":1818843754}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6824","nonce":1818843754},{"type":"v1","addr":"192.168.123.102:6825","nonce":1818843754}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6822","nonce":1818843754},{"type":"v1","addr":"192.168.123.102:6823","nonce":1818843754}]},"public_addr":"192.168.123.102:6819/1818843754","cluster_addr":"192.168.123.102:6821/1818843754","heartbeat_back_addr":"192.168.123.102:6825/1818843754","heartbeat_front_addr":"192.168.123.102:6823/1818843754","state":["exists","up"]},{"osd":3,"uuid":"c8c62231-6895-42f2-ba03-c49e0ca5380e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_en
d":0,"up_from":23,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":268408037},{"type":"v1","addr":"192.168.123.102:6827","nonce":268408037}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":268408037},{"type":"v1","addr":"192.168.123.102:6829","nonce":268408037}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6832","nonce":268408037},{"type":"v1","addr":"192.168.123.102:6833","nonce":268408037}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6830","nonce":268408037},{"type":"v1","addr":"192.168.123.102:6831","nonce":268408037}]},"public_addr":"192.168.123.102:6827/268408037","cluster_addr":"192.168.123.102:6829/268408037","heartbeat_back_addr":"192.168.123.102:6833/268408037","heartbeat_front_addr":"192.168.123.102:6831/268408037","state":["exists","up"]},{"osd":4,"uuid":"49541bd1-b8b0-4d09-9b97-6ca490c33f9d","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":28,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6800","nonce":1737072685},{"type":"v1","addr":"192.168.123.105:6801","nonce":1737072685}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6802","nonce":1737072685},{"type":"v1","addr":"192.168.123.105:6803","nonce":1737072685}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6806","nonce":1737072685},{"type":"v1","addr":"192.168.123.105:6807","nonce":1737072685}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6804","nonce":1737072685},{"type":"v1","addr":"192.168.123.105:6805","nonce":1737072685}]},"public_addr":"192.168.123.105:6801/1737072685","cluster_addr":"192.168.123.105:6803/1737072685","heartbeat_back_addr":"192.168.123.105:6807/1737072685","heartbeat_front_addr":"192.168.123.105:6805/1737072685","state":["exists","up"]},{"osd":5,"uuid":"2b35feb0-b492-4603-81e0-b864fb275f8c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":34,"up_thru":35,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6808","nonce":3303341454},{"type":"v1","addr":"192.168.123.105:6809","nonce":3303341454}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6810","nonce":3303341454},{"type":"v1","addr":"192.168.123.105:6811","nonce":3303341454}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6814","nonce":3303341454},{"type":"v1","addr":"192.168.123.105:6815","nonce":3303341454}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6812","nonce":3303341454},{"type":"v1","addr":"192.168.123.105:6813","nonce":3303341454}]},"public_addr":"192.168.123.105:6809/3303341454","cluster_addr":"192.168.123.105:6811/3303341454","heartbeat_back_addr":"192.168.123.105:6815/3303341454","heartbeat_front_addr":"192.168.123.105:6813/3303341454","state":["exists","up"]},{"osd":6,"uuid":"b2fa96ba-d56a-43b9-ab42-f9fc8abe2daf","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":40,"up_thru":41,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6816","nonce":566773014},{"type":"v1","addr":"192.168.123.105:6817","nonce":566773014}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6818","nonce":566773014},{"type":"v1","addr":"192.168.123.105:6819","nonce":566773014}]},"heartbeat_back
_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6822","nonce":566773014},{"type":"v1","addr":"192.168.123.105:6823","nonce":566773014}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6820","nonce":566773014},{"type":"v1","addr":"192.168.123.105:6821","nonce":566773014}]},"public_addr":"192.168.123.105:6817/566773014","cluster_addr":"192.168.123.105:6819/566773014","heartbeat_back_addr":"192.168.123.105:6823/566773014","heartbeat_front_addr":"192.168.123.105:6821/566773014","state":["exists","up"]},{"osd":7,"uuid":"2d1f3ab7-28e5-424b-a95a-4d9947f78095","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":46,"up_thru":47,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":3413503051},{"type":"v1","addr":"192.168.123.105:6825","nonce":3413503051}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3413503051},{"type":"v1","addr":"192.168.123.105:6827","nonce":3413503051}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6830","nonce":3413503051},{"type":"v1","addr":"192.168.123.105:6831","nonce":3413503051}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":3413503051},{"type":"v1","addr":"192.168.123.105:6829","nonce":3413503051}]},"public_addr":"192.168.123.105:6825/3413503051","cluster_addr":"192.168.123.105:6827/3413503051","heartbeat_back_addr":"192.168.123.105:6831/3413503051","heartbeat_front_addr":"192.168.123.105:6829/3413503051","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:44:53.076359+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:45:08.109400+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:45:23.624885+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:45:39.146568+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:45:53.043525+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:46:07.589294+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:46:23.196862+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:46:38.565809+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.102:0/1222859905":"2026-03-11T05:46:51.606099+0000","192.168.123.102:0/437369469":"2026-03-11T05:46:51.606099+0000","192.168.123.102:6800/3587596038":"2026-03-11T05:46:51.606099+0000","192.168.123.102:0/1174218704":"2026-03-11T05:46:51.606099+0000","192.168.123.102:6801/3587596
038":"2026-03-11T05:46:51.606099+0000","192.168.123.102:0/180339681":"2026-03-11T05:44:13.944512+0000","192.168.123.102:0/3558265816":"2026-03-11T05:44:13.944512+0000","192.168.123.102:0/1876503597":"2026-03-11T05:44:13.944512+0000","192.168.123.102:6801/3932825893":"2026-03-11T05:44:13.944512+0000","192.168.123.102:6800/3932825893":"2026-03-11T05:44:13.944512+0000","192.168.123.102:0/2702126893":"2026-03-11T05:44:04.884697+0000","192.168.123.102:0/4232033379":"2026-03-11T05:44:04.884697+0000","192.168.123.102:6801/123828670":"2026-03-11T05:44:04.884697+0000","192.168.123.102:0/3250290581":"2026-03-11T05:44:04.884697+0000","192.168.123.102:0/4276843242":"2026-03-11T05:46:51.606099+0000","192.168.123.102:6800/123828670":"2026-03-11T05:44:04.884697+0000"},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T05:46:59.041 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:58 vm02 bash[22526]: cluster 2026-03-10T05:46:57.680557+0000 mgr.y (mgr.14409) 20 : cluster [DBG] pgmap v6: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:46:59.082 INFO:tasks.cephadm.ceph_manager.ceph:all up! 2026-03-10T05:46:59.082 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph osd dump --format=json 2026-03-10T05:46:59.258 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:58 vm05 bash[32679]: b5db1e299295: Pull complete 2026-03-10T05:46:59.258 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:58 vm05 bash[32679]: Digest: sha256:f2269e73124dd0f60a7d19a2ce1264d33d08a985aed0ee6b0b89d0be470592cd 2026-03-10T05:46:59.258 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:58 vm05 bash[32679]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.3.1 2026-03-10T05:46:59.258 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.095Z caller=node_exporter.go:182 level=info msg="Starting node_exporter" version="(version=1.3.1, branch=HEAD, revision=a2321e7b940ddcff26873612bccdf7cd4c42b6b6)" 2026-03-10T05:46:59.258 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.095Z caller=node_exporter.go:183 level=info msg="Build context" build_context="(go=go1.17.3, user=root@243aafa5525c, date=20211205-11:09:49)" 2026-03-10T05:46:59.258 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.096Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+)($|/) 2026-03-10T05:46:59.258 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.096Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 
2026-03-10T05:46:59.258 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.096Z caller=node_exporter.go:108 level=info msg="Enabled collectors" 2026-03-10T05:46:59.258 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.096Z caller=node_exporter.go:115 level=info collector=arp 2026-03-10T05:46:59.258 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.096Z caller=node_exporter.go:115 level=info collector=bcache 2026-03-10T05:46:59.258 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.096Z caller=node_exporter.go:115 level=info collector=bonding 2026-03-10T05:46:59.258 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.096Z caller=node_exporter.go:115 level=info collector=btrfs 2026-03-10T05:46:59.258 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.096Z caller=node_exporter.go:115 level=info collector=conntrack 2026-03-10T05:46:59.258 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.096Z caller=node_exporter.go:115 level=info collector=cpu 2026-03-10T05:46:59.258 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.096Z caller=node_exporter.go:115 level=info collector=cpufreq 2026-03-10T05:46:59.258 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.096Z caller=node_exporter.go:115 level=info collector=diskstats 2026-03-10T05:46:59.258 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.096Z caller=node_exporter.go:115 level=info collector=dmi 2026-03-10T05:46:59.258 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.096Z caller=node_exporter.go:115 level=info collector=edac 2026-03-10T05:46:59.258 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.096Z caller=node_exporter.go:115 level=info collector=entropy 2026-03-10T05:46:59.258 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=fibrechannel 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=filefd 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=filesystem 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=hwmon 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=infiniband 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=ipvs 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 
bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=loadavg 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=mdadm 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=meminfo 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=netclass 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=netdev 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=netstat 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=nfs 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=nfsd 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=nvme 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=os 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=powersupplyclass 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=pressure 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=rapl 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=schedstat 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=sockstat 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=softnet 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=stat 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=tapestats 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=textfile 
2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=thermal_zone 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=time 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=udp_queues 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.097Z caller=node_exporter.go:115 level=info collector=uname 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.098Z caller=node_exporter.go:115 level=info collector=vmstat 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.098Z caller=node_exporter.go:115 level=info collector=xfs 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.098Z caller=node_exporter.go:115 level=info collector=zfs 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.098Z caller=node_exporter.go:199 level=info msg="Listening on" address=:9100 2026-03-10T05:46:59.259 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[32679]: ts=2026-03-10T05:46:59.098Z caller=tls_config.go:195 level=info msg="TLS is disabled." http2=false 2026-03-10T05:47:00.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:46:59 vm05 bash[17864]: audit 2026-03-10T05:46:59.030173+0000 mon.a (mon.0) 599 : audit [DBG] from='client.? 192.168.123.102:0/1460628699' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T05:47:00.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:46:59 vm02 bash[17462]: audit 2026-03-10T05:46:59.030173+0000 mon.a (mon.0) 599 : audit [DBG] from='client.? 192.168.123.102:0/1460628699' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T05:47:00.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:46:59 vm02 bash[22526]: audit 2026-03-10T05:46:59.030173+0000 mon.a (mon.0) 599 : audit [DBG] from='client.? 
192.168.123.102:0/1460628699' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T05:47:00.719 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/mon.c/config 2026-03-10T05:47:01.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:00 vm05 bash[17864]: cluster 2026-03-10T05:46:59.680828+0000 mgr.y (mgr.14409) 21 : cluster [DBG] pgmap v7: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:01.030 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T05:47:01.031 INFO:teuthology.orchestra.run.vm02.stdout:{"epoch":49,"fsid":"107483ae-1c44-11f1-b530-c1172cd6122a","created":"2026-03-10T05:43:51.949234+0000","modified":"2026-03-10T05:46:51.606195+0000","last_up_change":"2026-03-10T05:46:40.617204+0000","last_in_change":"2026-03-10T05:46:27.560077+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"quincy","pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T05:45:26.175212+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"20","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}}}],"osds":[{"osd":0,"uuid":"181bfe3a-c244-4b31-bf3a-c6074cc650d1","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":46,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6802","nonce":3358143121},{"type":"v1","addr":"192.168.123.102:6803","nonce":3358143121}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6804","nonce":3358143121},{"type":"v1","addr":"192.168.123.102:6805","nonce":3358143121}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6808","nonce":3358143121},{"type":"v1","addr":"192.168.123.102:6809","no
nce":3358143121}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6806","nonce":3358143121},{"type":"v1","addr":"192.168.123.102:6807","nonce":3358143121}]},"public_addr":"192.168.123.102:6803/3358143121","cluster_addr":"192.168.123.102:6805/3358143121","heartbeat_back_addr":"192.168.123.102:6809/3358143121","heartbeat_front_addr":"192.168.123.102:6807/3358143121","state":["exists","up"]},{"osd":1,"uuid":"c0820da9-42eb-422f-88aa-598d51d4e5e7","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":29,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6810","nonce":3944310722},{"type":"v1","addr":"192.168.123.102:6811","nonce":3944310722}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6812","nonce":3944310722},{"type":"v1","addr":"192.168.123.102:6813","nonce":3944310722}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6816","nonce":3944310722},{"type":"v1","addr":"192.168.123.102:6817","nonce":3944310722}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6814","nonce":3944310722},{"type":"v1","addr":"192.168.123.102:6815","nonce":3944310722}]},"public_addr":"192.168.123.102:6811/3944310722","cluster_addr":"192.168.123.102:6813/3944310722","heartbeat_back_addr":"192.168.123.102:6817/3944310722","heartbeat_front_addr":"192.168.123.102:6815/3944310722","state":["exists","up"]},{"osd":2,"uuid":"2d5b11d8-3856-47e7-80bc-ba0d5e91fd6c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6818","nonce":1818843754},{"type":"v1","addr":"192.168.123.102:6819","nonce":1818843754}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6820","nonce":1818843754},{"type":"v1","addr":"192.168.123.102:6821","nonce":1818843754}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6824","nonce":1818843754},{"type":"v1","addr":"192.168.123.102:6825","nonce":1818843754}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6822","nonce":1818843754},{"type":"v1","addr":"192.168.123.102:6823","nonce":1818843754}]},"public_addr":"192.168.123.102:6819/1818843754","cluster_addr":"192.168.123.102:6821/1818843754","heartbeat_back_addr":"192.168.123.102:6825/1818843754","heartbeat_front_addr":"192.168.123.102:6823/1818843754","state":["exists","up"]},{"osd":3,"uuid":"c8c62231-6895-42f2-ba03-c49e0ca5380e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":23,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6826","nonce":268408037},{"type":"v1","addr":"192.168.123.102:6827","nonce":268408037}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6828","nonce":268408037},{"type":"v1","addr":"192.168.123.102:6829","nonce":268408037}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6832","nonce":268408037},{"type":"v1","addr":"192.168.123.102:6833","nonce":268408037}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.102:6830","nonce":268408037},{"type":"v1","addr":"192.168.123.102:6831","nonce":268408037}]},"public_addr":"192.168.123.102:6827/268408037","cluster_addr":"192.168.123.102:6829/268408037","heartbeat_back_addr":"192.168.123.102:6833/268408037","heartbeat_front_addr":"192.168.123.102:
6831/268408037","state":["exists","up"]},{"osd":4,"uuid":"49541bd1-b8b0-4d09-9b97-6ca490c33f9d","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":28,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6800","nonce":1737072685},{"type":"v1","addr":"192.168.123.105:6801","nonce":1737072685}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6802","nonce":1737072685},{"type":"v1","addr":"192.168.123.105:6803","nonce":1737072685}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6806","nonce":1737072685},{"type":"v1","addr":"192.168.123.105:6807","nonce":1737072685}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6804","nonce":1737072685},{"type":"v1","addr":"192.168.123.105:6805","nonce":1737072685}]},"public_addr":"192.168.123.105:6801/1737072685","cluster_addr":"192.168.123.105:6803/1737072685","heartbeat_back_addr":"192.168.123.105:6807/1737072685","heartbeat_front_addr":"192.168.123.105:6805/1737072685","state":["exists","up"]},{"osd":5,"uuid":"2b35feb0-b492-4603-81e0-b864fb275f8c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":34,"up_thru":35,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6808","nonce":3303341454},{"type":"v1","addr":"192.168.123.105:6809","nonce":3303341454}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6810","nonce":3303341454},{"type":"v1","addr":"192.168.123.105:6811","nonce":3303341454}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6814","nonce":3303341454},{"type":"v1","addr":"192.168.123.105:6815","nonce":3303341454}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6812","nonce":3303341454},{"type":"v1","addr":"192.168.123.105:6813","nonce":3303341454}]},"public_addr":"192.168.123.105:6809/3303341454","cluster_addr":"192.168.123.105:6811/3303341454","heartbeat_back_addr":"192.168.123.105:6815/3303341454","heartbeat_front_addr":"192.168.123.105:6813/3303341454","state":["exists","up"]},{"osd":6,"uuid":"b2fa96ba-d56a-43b9-ab42-f9fc8abe2daf","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":40,"up_thru":41,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6816","nonce":566773014},{"type":"v1","addr":"192.168.123.105:6817","nonce":566773014}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6818","nonce":566773014},{"type":"v1","addr":"192.168.123.105:6819","nonce":566773014}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6822","nonce":566773014},{"type":"v1","addr":"192.168.123.105:6823","nonce":566773014}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6820","nonce":566773014},{"type":"v1","addr":"192.168.123.105:6821","nonce":566773014}]},"public_addr":"192.168.123.105:6817/566773014","cluster_addr":"192.168.123.105:6819/566773014","heartbeat_back_addr":"192.168.123.105:6823/566773014","heartbeat_front_addr":"192.168.123.105:6821/566773014","state":["exists","up"]},{"osd":7,"uuid":"2d1f3ab7-28e5-424b-a95a-4d9947f78095","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":46,"up_thru":47,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":3413503051},{"type":"v1","addr":"192.168.123.105:6825","nonce":341
3503051}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":3413503051},{"type":"v1","addr":"192.168.123.105:6827","nonce":3413503051}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6830","nonce":3413503051},{"type":"v1","addr":"192.168.123.105:6831","nonce":3413503051}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":3413503051},{"type":"v1","addr":"192.168.123.105:6829","nonce":3413503051}]},"public_addr":"192.168.123.105:6825/3413503051","cluster_addr":"192.168.123.105:6827/3413503051","heartbeat_back_addr":"192.168.123.105:6831/3413503051","heartbeat_front_addr":"192.168.123.105:6829/3413503051","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:44:53.076359+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:45:08.109400+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:45:23.624885+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:45:39.146568+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:45:53.043525+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:46:07.589294+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:46:23.196862+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T05:46:38.565809+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.102:0/1222859905":"2026-03-11T05:46:51.606099+0000","192.168.123.102:0/437369469":"2026-03-11T05:46:51.606099+0000","192.168.123.102:6800/3587596038":"2026-03-11T05:46:51.606099+0000","192.168.123.102:0/1174218704":"2026-03-11T05:46:51.606099+0000","192.168.123.102:6801/3587596038":"2026-03-11T05:46:51.606099+0000","192.168.123.102:0/180339681":"2026-03-11T05:44:13.944512+0000","192.168.123.102:0/3558265816":"2026-03-11T05:44:13.944512+0000","192.168.123.102:0/1876503597":"2026-03-11T05:44:13.944512+0000","192.168.123.102:6801/3932825893":"2026-03-11T05:44:13.944512+0000","192.168.123.102:6800/3932825893":"2026-03-11T05:44:13.944512+0000","192.168.123.102:0/2702126893":"2026-03-11T05:44:04.884697+0000","192.168.123.102:0/4232033379":"2026-03-11T05:44:04.884697+0000","192.168.123.102:6801/123828670":"2026-03-11T05:44:04.884697+0000","192.168.123.102:0/3250290581":"2026-03-11T05:44:04.884697+0000","192.168.123.102:0/4276843242":"2026-03-11T05:46:51.606099+0000","192.168.123.102:6800/123828670":"2026-03-11T05:44:04.884697+0000"},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","techniqu
e":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T05:47:01.041 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:00 vm02 bash[17462]: cluster 2026-03-10T05:46:59.680828+0000 mgr.y (mgr.14409) 21 : cluster [DBG] pgmap v7: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:01.041 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:00 vm02 bash[22526]: cluster 2026-03-10T05:46:59.680828+0000 mgr.y (mgr.14409) 21 : cluster [DBG] pgmap v7: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:01.084 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph tell osd.0 flush_pg_stats 2026-03-10T05:47:01.084 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph tell osd.1 flush_pg_stats 2026-03-10T05:47:01.084 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph tell osd.2 flush_pg_stats 2026-03-10T05:47:01.084 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph tell osd.3 flush_pg_stats 2026-03-10T05:47:01.084 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph tell osd.4 flush_pg_stats 2026-03-10T05:47:01.084 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph tell osd.5 flush_pg_stats 2026-03-10T05:47:01.084 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph tell osd.6 flush_pg_stats 2026-03-10T05:47:01.085 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph tell osd.7 flush_pg_stats 2026-03-10T05:47:02.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:01 vm05 bash[17864]: audit 2026-03-10T05:47:01.024477+0000 mon.b (mon.2) 23 : audit [DBG] from='client.? 192.168.123.102:0/2321808630' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T05:47:02.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:01 vm02 bash[17462]: audit 2026-03-10T05:47:01.024477+0000 mon.b (mon.2) 23 : audit [DBG] from='client.? 192.168.123.102:0/2321808630' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T05:47:02.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:01 vm02 bash[22526]: audit 2026-03-10T05:47:01.024477+0000 mon.b (mon.2) 23 : audit [DBG] from='client.? 
192.168.123.102:0/2321808630' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T05:47:02.976 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:02 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:02.976 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:02 vm05 bash[17864]: cluster 2026-03-10T05:47:01.681094+0000 mgr.y (mgr.14409) 22 : cluster [DBG] pgmap v8: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:02.976 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:02 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:02.976 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:47:02 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:02.976 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:47:02 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:02.976 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:02 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:02.976 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:02 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:02.976 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:02 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:02.976 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:02 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:02.976 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:02 vm05 systemd[1]: Started Ceph prometheus.a for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T05:47:02.977 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:47:02 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:02.978 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:47:02 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:02.978 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:47:02 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:02.978 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:47:02 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:02.978 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:47:02 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:02.978 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:47:02 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
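The `ceph osd dump --format=json` blob captured above is what tasks.cephadm.ceph_manager inspects before logging "all up!": every entry in its "osds" array must report both "up" and "in" as 1. A minimal sketch of that check in Python — an illustrative helper, not teuthology's actual code — assuming the dump has been saved to a file:

    import json

    def all_osds_up_in(dump_path):
        # Parse the JSON printed by `ceph osd dump --format=json`.
        with open(dump_path) as f:
            dump = json.load(f)
        # Each OSD entry carries integer "up"/"in" flags (1 == true).
        return all(o["up"] == 1 and o["in"] == 1 for o in dump["osds"])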
2026-03-10T05:47:02.978 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:47:02 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:02.978 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:47:02 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:02.978 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:47:02 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:02.978 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:47:02 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
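Every daemon start on vm02/vm05 repeats the same systemd complaint because the cephadm-generated unit ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service sets KillMode=none. The warning itself names the remedy; a drop-in override along these lines would adopt one of the recommended modes (shown for illustration only — the test run leaves the shipped unit untouched):

    # /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service.d/override.conf
    [Service]
    # Replace deprecated KillMode=none with a mode systemd recommends.
    KillMode=mixed

followed by `systemctl daemon-reload` so the override takes effect.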
2026-03-10T05:47:03.011 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:02 vm02 bash[17462]: cluster 2026-03-10T05:47:01.681094+0000 mgr.y (mgr.14409) 22 : cluster [DBG] pgmap v8: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:03.011 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:02 vm02 bash[22526]: cluster 2026-03-10T05:47:01.681094+0000 mgr.y (mgr.14409) 22 : cluster [DBG] pgmap v8: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:03.258 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:03 vm05 bash[33062]: ts=2026-03-10T05:47:03.084Z caller=main.go:475 level=info msg="No time or size retention was set so using the default time retention" duration=15d 2026-03-10T05:47:03.258 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:03 vm05 bash[33062]: ts=2026-03-10T05:47:03.084Z caller=main.go:512 level=info msg="Starting Prometheus" version="(version=2.33.4, branch=HEAD, revision=83032011a5d3e6102624fe58241a374a7201fee8)" 2026-03-10T05:47:03.258 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:03 vm05 bash[33062]: ts=2026-03-10T05:47:03.084Z caller=main.go:517 level=info build_context="(go=go1.17.7, user=root@d13bf69e7be8, date=20220222-16:51:28)" 2026-03-10T05:47:03.258 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:03 vm05 bash[33062]: ts=2026-03-10T05:47:03.084Z caller=main.go:518 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm05 (none))" 2026-03-10T05:47:03.258 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:03 vm05 bash[33062]: ts=2026-03-10T05:47:03.084Z caller=main.go:519 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-10T05:47:03.258 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:03 vm05 bash[33062]: ts=2026-03-10T05:47:03.084Z caller=main.go:520 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-10T05:47:03.258 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:03 vm05 bash[33062]: ts=2026-03-10T05:47:03.085Z caller=web.go:570 level=info component=web msg="Start listening for connections" address=:9095 2026-03-10T05:47:03.258 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:03 vm05 bash[33062]: ts=2026-03-10T05:47:03.086Z caller=main.go:923 level=info msg="Starting TSDB ..." 
2026-03-10T05:47:03.258 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:03 vm05 bash[33062]: ts=2026-03-10T05:47:03.087Z caller=head.go:493 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-10T05:47:03.258 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:03 vm05 bash[33062]: ts=2026-03-10T05:47:03.087Z caller=head.go:527 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.142µs 2026-03-10T05:47:03.258 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:03 vm05 bash[33062]: ts=2026-03-10T05:47:03.087Z caller=head.go:533 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-10T05:47:03.258 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:03 vm05 bash[33062]: ts=2026-03-10T05:47:03.087Z caller=head.go:604 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 2026-03-10T05:47:03.258 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:03 vm05 bash[33062]: ts=2026-03-10T05:47:03.087Z caller=head.go:610 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=16.16µs wal_replay_duration=84.507µs total_replay_duration=108.713µs 2026-03-10T05:47:03.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:03 vm05 bash[33062]: ts=2026-03-10T05:47:03.088Z caller=main.go:944 level=info fs_type=EXT4_SUPER_MAGIC 2026-03-10T05:47:03.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:03 vm05 bash[33062]: ts=2026-03-10T05:47:03.088Z caller=main.go:947 level=info msg="TSDB started" 2026-03-10T05:47:03.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:03 vm05 bash[33062]: ts=2026-03-10T05:47:03.088Z caller=main.go:1128 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-10T05:47:03.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:03 vm05 bash[33062]: ts=2026-03-10T05:47:03.089Z caller=tls_config.go:195 level=info component=web msg="TLS is disabled." http2=false 2026-03-10T05:47:03.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:03 vm05 bash[33062]: ts=2026-03-10T05:47:03.100Z caller=main.go:1165 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=12.815614ms db_storage=541ns remote_storage=922ns web_handler=160ns query_engine=301ns scrape=1.72683ms scrape_sd=48.09µs notify=411ns notify_sd=1.563µs rules=10.91553ms 2026-03-10T05:47:03.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:03 vm05 bash[33062]: ts=2026-03-10T05:47:03.101Z caller=main.go:896 level=info msg="Server is ready to receive web requests." 2026-03-10T05:47:03.334 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:03 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
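At this point Prometheus logs "Server is ready to receive web requests" on :9095 with TLS disabled, mirroring the node-exporter that came up on :9100 earlier. A sketch of a readiness probe one could run from the test host — a hypothetical helper, not part of this suite — using Prometheus's standard /-/ready endpoint:

    import urllib.request

    def prometheus_ready(host, port=9095, timeout=5):
        # /-/ready returns HTTP 200 once Prometheus can serve traffic;
        # plain HTTP because TLS is disabled in this deployment.
        try:
            with urllib.request.urlopen(
                    f"http://{host}:{port}/-/ready", timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False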
2026-03-10T05:47:03.922 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/mon.c/config 2026-03-10T05:47:03.923 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/mon.c/config 2026-03-10T05:47:03.924 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/mon.c/config 2026-03-10T05:47:03.926 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/mon.c/config 2026-03-10T05:47:03.931 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/mon.c/config 2026-03-10T05:47:03.933 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/mon.c/config 2026-03-10T05:47:03.933 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/mon.c/config 2026-03-10T05:47:03.934 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/mon.c/config 2026-03-10T05:47:04.215 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:04 vm02 bash[17462]: audit 2026-03-10T05:47:02.999563+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:04.215 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:04 vm02 bash[17462]: cephadm 2026-03-10T05:47:03.004769+0000 mgr.y (mgr.14409) 23 : cephadm [INF] Deploying daemon alertmanager.a on vm02 2026-03-10T05:47:04.215 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:04 vm02 bash[17462]: cluster 2026-03-10T05:47:03.681304+0000 mgr.y (mgr.14409) 24 : cluster [DBG] pgmap v9: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:04.215 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:04 vm02 bash[22526]: audit 2026-03-10T05:47:02.999563+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:04.215 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:04 vm02 bash[22526]: cephadm 2026-03-10T05:47:03.004769+0000 mgr.y (mgr.14409) 23 : cephadm [INF] Deploying daemon alertmanager.a on vm02 2026-03-10T05:47:04.215 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:04 vm02 bash[22526]: cluster 2026-03-10T05:47:03.681304+0000 mgr.y (mgr.14409) 24 : cluster [DBG] pgmap v9: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:04.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:04 vm05 bash[17864]: audit 2026-03-10T05:47:02.999563+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:04.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:04 vm05 bash[17864]: cephadm 2026-03-10T05:47:03.004769+0000 mgr.y (mgr.14409) 23 : cephadm [INF] Deploying daemon alertmanager.a on vm02 2026-03-10T05:47:04.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:04 vm05 bash[17864]: cluster 2026-03-10T05:47:03.681304+0000 mgr.y (mgr.14409) 24 : cluster [DBG] pgmap v9: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:04.775 INFO:teuthology.orchestra.run.vm02.stdout:34359738395 2026-03-10T05:47:04.776 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph osd last-stat-seq osd.0 2026-03-10T05:47:04.993 
INFO:teuthology.orchestra.run.vm02.stdout:197568495622 2026-03-10T05:47:04.993 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph osd last-stat-seq osd.7 2026-03-10T05:47:05.035 INFO:teuthology.orchestra.run.vm02.stdout:146028888076 2026-03-10T05:47:05.035 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph osd last-stat-seq osd.5 2026-03-10T05:47:05.084 INFO:teuthology.orchestra.run.vm02.stdout:51539607576 2026-03-10T05:47:05.084 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph osd last-stat-seq osd.1 2026-03-10T05:47:05.218 INFO:teuthology.orchestra.run.vm02.stdout:120259084303 2026-03-10T05:47:05.218 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph osd last-stat-seq osd.4 2026-03-10T05:47:05.284 INFO:teuthology.orchestra.run.vm02.stdout:98784247826 2026-03-10T05:47:05.285 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph osd last-stat-seq osd.3 2026-03-10T05:47:05.347 INFO:teuthology.orchestra.run.vm02.stdout:73014444053 2026-03-10T05:47:05.347 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph osd last-stat-seq osd.2 2026-03-10T05:47:05.356 INFO:teuthology.orchestra.run.vm02.stdout:171798691849 2026-03-10T05:47:05.356 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph osd last-stat-seq osd.6 2026-03-10T05:47:06.535 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:06 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:06.535 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:47:06 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:06.535 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:47:06 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
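The flush_pg_stats / last-stat-seq exchange above acts as a barrier for fresh PG statistics: `ceph tell osd.N flush_pg_stats` prints the sequence number the OSD just flushed, `ceph osd last-stat-seq osd.N` reports what the cluster has registered, and the large integers echoed back (34359738395, 197568495622, ...) are those sequence numbers on both sides of the comparison. A condensed sketch of that wait loop, with a hypothetical run_ceph() wrapper standing in for the `cephadm ... shell -- ceph ...` invocations:

    import time

    def wait_for_pg_stats(run_ceph, osd_id, timeout=60):
        # The flush command returns the seq the OSD just pushed out.
        target = int(run_ceph(f"tell osd.{osd_id} flush_pg_stats"))
        deadline = time.time() + timeout
        while time.time() < deadline:
            # Poll until the cluster has registered stats at least that new.
            if int(run_ceph(f"osd last-stat-seq osd.{osd_id}")) >= target:
                return True
            time.sleep(2)
        return False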
2026-03-10T05:47:06.535 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:47:06 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:06.535 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:47:06 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:06.535 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:47:06 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:06.535 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:06 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:06.535 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:06 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:06.535 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:06 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:06.535 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:06 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:06.535 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:47:06 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:06.833 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:47:06 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:06.834 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:47:06 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:06.834 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:47:06 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:06.834 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:47:06 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:06.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:06 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T05:47:06.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:06 vm02 bash[17462]: cluster 2026-03-10T05:47:05.681511+0000 mgr.y (mgr.14409) 25 : cluster [DBG] pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:06.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:06 vm02 bash[17462]: audit 2026-03-10T05:47:06.625333+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:06.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:06 vm02 bash[17462]: audit 2026-03-10T05:47:06.663148+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:06.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:06 vm02 bash[17462]: audit 2026-03-10T05:47:06.668790+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:06.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:06 vm02 bash[17462]: audit 2026-03-10T05:47:06.671615+0000 mon.c (mon.1) 51 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T05:47:06.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:06 vm02 bash[17462]: audit 2026-03-10T05:47:06.676840+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:06.834 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:47:06 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:06.834 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:47:06 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:06.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:06 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T05:47:06.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:06 vm02 bash[22526]: cluster 2026-03-10T05:47:05.681511+0000 mgr.y (mgr.14409) 25 : cluster [DBG] pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:06.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:06 vm02 bash[22526]: audit 2026-03-10T05:47:06.625333+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:06.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:06 vm02 bash[22526]: audit 2026-03-10T05:47:06.663148+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:06.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:06 vm02 bash[22526]: audit 2026-03-10T05:47:06.668790+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:06.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:06 vm02 bash[22526]: audit 2026-03-10T05:47:06.671615+0000 mon.c (mon.1) 51 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T05:47:06.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:06 vm02 bash[22526]: audit 2026-03-10T05:47:06.676840+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:06.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:06 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:06.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:06 vm02 systemd[1]: Started Ceph alertmanager.a for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T05:47:06.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:06 vm02 bash[39873]: level=info ts=2026-03-10T05:47:06.724Z caller=main.go:225 msg="Starting Alertmanager" version="(version=0.23.0, branch=HEAD, revision=61046b17771a57cfd4c4a51be370ab930a4d7d54)" 2026-03-10T05:47:06.835 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:06 vm02 bash[39873]: level=info ts=2026-03-10T05:47:06.724Z caller=main.go:226 build_context="(go=go1.16.7, user=root@e21a959be8d2, date=20210825-10:48:55)" 2026-03-10T05:47:06.835 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:06 vm02 bash[39873]: level=info ts=2026-03-10T05:47:06.725Z caller=cluster.go:184 component=cluster msg="setting advertise address explicitly" addr=192.168.123.102 port=9094 2026-03-10T05:47:06.835 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:06 vm02 bash[39873]: level=info ts=2026-03-10T05:47:06.726Z caller=cluster.go:671 component=cluster msg="Waiting for gossip to settle..." 
interval=2s 2026-03-10T05:47:06.835 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:06 vm02 bash[39873]: level=info ts=2026-03-10T05:47:06.748Z caller=coordinator.go:113 component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-10T05:47:06.835 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:06 vm02 bash[39873]: level=info ts=2026-03-10T05:47:06.750Z caller=coordinator.go:126 component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-10T05:47:06.835 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:06 vm02 bash[39873]: level=info ts=2026-03-10T05:47:06.751Z caller=main.go:518 msg=Listening address=:9093 2026-03-10T05:47:06.835 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:06 vm02 bash[39873]: level=info ts=2026-03-10T05:47:06.751Z caller=tls_config.go:191 msg="TLS is disabled." http2=false 2026-03-10T05:47:07.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:06 vm05 bash[17864]: cluster 2026-03-10T05:47:05.681511+0000 mgr.y (mgr.14409) 25 : cluster [DBG] pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:07.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:06 vm05 bash[17864]: audit 2026-03-10T05:47:06.625333+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:07.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:06 vm05 bash[17864]: audit 2026-03-10T05:47:06.663148+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:07.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:06 vm05 bash[17864]: audit 2026-03-10T05:47:06.668790+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:07.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:06 vm05 bash[17864]: audit 2026-03-10T05:47:06.671615+0000 mon.c (mon.1) 51 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T05:47:07.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:06 vm05 bash[17864]: audit 2026-03-10T05:47:06.676840+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:07.008 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:06 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:08.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:07 vm05 bash[17864]: audit 2026-03-10T05:47:06.671905+0000 mgr.y (mgr.14409) 26 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T05:47:08.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:07 vm05 bash[17864]: cephadm 2026-03-10T05:47:06.687747+0000 mgr.y (mgr.14409) 27 : cephadm [INF] Deploying daemon grafana.a on vm05 2026-03-10T05:47:08.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:07 vm05 bash[17864]: audit 2026-03-10T05:47:06.748412+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:08.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:07 vm02 bash[17462]: audit 2026-03-10T05:47:06.671905+0000 mgr.y (mgr.14409) 26 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T05:47:08.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:07 vm02 bash[17462]: cephadm 2026-03-10T05:47:06.687747+0000 mgr.y (mgr.14409) 27 : cephadm [INF] Deploying daemon grafana.a on vm05 2026-03-10T05:47:08.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:07 vm02 bash[17462]: audit 2026-03-10T05:47:06.748412+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:08.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:07 vm02 bash[22526]: audit 2026-03-10T05:47:06.671905+0000 mgr.y (mgr.14409) 26 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T05:47:08.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:07 vm02 bash[22526]: cephadm 2026-03-10T05:47:06.687747+0000 mgr.y (mgr.14409) 27 : cephadm [INF] Deploying daemon grafana.a on vm05 2026-03-10T05:47:08.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:07 vm02 bash[22526]: audit 2026-03-10T05:47:06.748412+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:08.459 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/mon.c/config 2026-03-10T05:47:08.460 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/mon.c/config 2026-03-10T05:47:08.460 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/mon.c/config 2026-03-10T05:47:08.463 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/mon.c/config 2026-03-10T05:47:08.468 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/mon.c/config 2026-03-10T05:47:08.469 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/mon.c/config 2026-03-10T05:47:08.469 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/mon.c/config 2026-03-10T05:47:08.469 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/mon.c/config 2026-03-10T05:47:09.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:08 vm05 bash[17864]: cluster 2026-03-10T05:47:07.681764+0000 mgr.y (mgr.14409) 28 : cluster [DBG] pgmap v11: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:09.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:08 vm02 bash[17462]: cluster 2026-03-10T05:47:07.681764+0000 mgr.y (mgr.14409) 28 : cluster [DBG] pgmap v11: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 
GiB / 160 GiB avail 2026-03-10T05:47:09.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:08 vm02 bash[22526]: cluster 2026-03-10T05:47:07.681764+0000 mgr.y (mgr.14409) 28 : cluster [DBG] pgmap v11: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:09.084 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:08 vm02 bash[39873]: level=info ts=2026-03-10T05:47:08.727Z caller=cluster.go:696 component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000905442s 2026-03-10T05:47:09.597 INFO:teuthology.orchestra.run.vm02.stdout:73014444053 2026-03-10T05:47:09.601 INFO:teuthology.orchestra.run.vm02.stdout:171798691849 2026-03-10T05:47:09.673 INFO:teuthology.orchestra.run.vm02.stdout:34359738395 2026-03-10T05:47:09.704 INFO:teuthology.orchestra.run.vm02.stdout:146028888076 2026-03-10T05:47:09.750 INFO:tasks.cephadm.ceph_manager.ceph:need seq 73014444053 got 73014444053 for osd.2 2026-03-10T05:47:09.750 DEBUG:teuthology.parallel:result is None 2026-03-10T05:47:09.824 INFO:tasks.cephadm.ceph_manager.ceph:need seq 171798691849 got 171798691849 for osd.6 2026-03-10T05:47:09.825 DEBUG:teuthology.parallel:result is None 2026-03-10T05:47:09.880 INFO:tasks.cephadm.ceph_manager.ceph:need seq 146028888076 got 146028888076 for osd.5 2026-03-10T05:47:09.880 DEBUG:teuthology.parallel:result is None 2026-03-10T05:47:09.894 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738395 got 34359738395 for osd.0 2026-03-10T05:47:09.894 DEBUG:teuthology.parallel:result is None 2026-03-10T05:47:09.910 INFO:teuthology.orchestra.run.vm02.stdout:51539607577 2026-03-10T05:47:09.982 INFO:tasks.cephadm.ceph_manager.ceph:need seq 51539607576 got 51539607577 for osd.1 2026-03-10T05:47:09.982 INFO:teuthology.orchestra.run.vm02.stdout:197568495623 2026-03-10T05:47:09.982 DEBUG:teuthology.parallel:result is None 2026-03-10T05:47:09.998 INFO:teuthology.orchestra.run.vm02.stdout:98784247827 2026-03-10T05:47:10.007 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:09 vm02 bash[17462]: audit 2026-03-10T05:47:09.588714+0000 mon.c (mon.1) 52 : audit [DBG] from='client.? 192.168.123.102:0/2279055470' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T05:47:10.007 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:09 vm02 bash[17462]: audit 2026-03-10T05:47:09.601362+0000 mon.a (mon.0) 606 : audit [DBG] from='client.? 192.168.123.102:0/1104130991' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T05:47:10.007 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:09 vm02 bash[17462]: audit 2026-03-10T05:47:09.671844+0000 mon.a (mon.0) 607 : audit [DBG] from='client.? 192.168.123.102:0/1497591621' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T05:47:10.007 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:09 vm02 bash[17462]: audit 2026-03-10T05:47:09.700879+0000 mon.c (mon.1) 53 : audit [DBG] from='client.? 192.168.123.102:0/1938884499' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T05:47:10.007 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:09 vm02 bash[22526]: audit 2026-03-10T05:47:09.588714+0000 mon.c (mon.1) 52 : audit [DBG] from='client.? 
192.168.123.102:0/2279055470' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T05:47:10.007 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:09 vm02 bash[22526]: audit 2026-03-10T05:47:09.601362+0000 mon.a (mon.0) 606 : audit [DBG] from='client.? 192.168.123.102:0/1104130991' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T05:47:10.007 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:09 vm02 bash[22526]: audit 2026-03-10T05:47:09.671844+0000 mon.a (mon.0) 607 : audit [DBG] from='client.? 192.168.123.102:0/1497591621' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T05:47:10.007 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:09 vm02 bash[22526]: audit 2026-03-10T05:47:09.700879+0000 mon.c (mon.1) 53 : audit [DBG] from='client.? 192.168.123.102:0/1938884499' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T05:47:10.047 INFO:teuthology.orchestra.run.vm02.stdout:120259084304 2026-03-10T05:47:10.056 INFO:tasks.cephadm.ceph_manager.ceph:need seq 197568495622 got 197568495623 for osd.7 2026-03-10T05:47:10.056 DEBUG:teuthology.parallel:result is None 2026-03-10T05:47:10.066 INFO:tasks.cephadm.ceph_manager.ceph:need seq 98784247826 got 98784247827 for osd.3 2026-03-10T05:47:10.066 DEBUG:teuthology.parallel:result is None 2026-03-10T05:47:10.111 INFO:tasks.cephadm.ceph_manager.ceph:need seq 120259084303 got 120259084304 for osd.4 2026-03-10T05:47:10.111 DEBUG:teuthology.parallel:result is None 2026-03-10T05:47:10.112 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-10T05:47:10.112 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph pg dump --format=json 2026-03-10T05:47:10.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:09 vm05 bash[17864]: audit 2026-03-10T05:47:09.588714+0000 mon.c (mon.1) 52 : audit [DBG] from='client.? 192.168.123.102:0/2279055470' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T05:47:10.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:09 vm05 bash[17864]: audit 2026-03-10T05:47:09.601362+0000 mon.a (mon.0) 606 : audit [DBG] from='client.? 192.168.123.102:0/1104130991' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T05:47:10.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:09 vm05 bash[17864]: audit 2026-03-10T05:47:09.671844+0000 mon.a (mon.0) 607 : audit [DBG] from='client.? 192.168.123.102:0/1497591621' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T05:47:10.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:09 vm05 bash[17864]: audit 2026-03-10T05:47:09.700879+0000 mon.c (mon.1) 53 : audit [DBG] from='client.? 192.168.123.102:0/1938884499' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T05:47:11.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:10 vm02 bash[17462]: cluster 2026-03-10T05:47:09.682024+0000 mgr.y (mgr.14409) 29 : cluster [DBG] pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail
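The interleaved harness lines above show the PG-stat flush that precedes the health check: `ceph tell osd.N flush_pg_stats` returns a stat sequence number per OSD (the "need seq" value), and the run then polls `ceph osd last-stat-seq N` (the `osd last-stat-seq` audit entries) until the monitors report at least that value (the "got" value); the teuthology.parallel lines indicate the per-OSD waits run in parallel. A minimal sketch of the same wait, polling serially for brevity and assuming a hypothetical `ceph()` helper that shells out through cephadm the way the DEBUG line above does (the helper, the timeout, and the 1 s poll interval are illustrative, not the harness's actual code):

    import subprocess
    import time

    FSID = "107483ae-1c44-11f1-b530-c1172cd6122a"  # fsid of this run's cluster
    IMAGE = "quay.io/ceph/ceph:v17.2.0"            # bootstrap image used by the job

    def ceph(*args):
        # Run one ceph command inside a cephadm shell, mirroring the DEBUG line
        # "sudo /home/ubuntu/cephtest/cephadm --image ... shell --fsid ... -- ceph ..."
        cmd = ["sudo", "/home/ubuntu/cephtest/cephadm", "--image", IMAGE,
               "shell", "--fsid", FSID, "--", "ceph", *args]
        return subprocess.check_output(cmd, text=True).strip()

    def flush_pg_stats(osds, timeout=300):
        # "tell osd.N flush_pg_stats" makes the OSD publish its PG stats and
        # prints the sequence number of that publication (the "need seq" value).
        need = {osd: int(ceph("tell", f"osd.{osd}", "flush_pg_stats")) for osd in osds}
        deadline = time.time() + timeout
        for osd, seq in need.items():
            while True:
                # "osd last-stat-seq N" reports the newest stat sequence the
                # monitors have seen from osd.N (the "got" value in the log).
                got = int(ceph("osd", "last-stat-seq", str(osd)))
                if got >= seq:
                    break
                if time.time() > deadline:
                    raise RuntimeError(f"osd.{osd}: stat seq stuck at {got}, need {seq}")
                time.sleep(1)

    flush_pg_stats(range(8))  # this job reports seqs for osd.0 through osd.7

Only after every OSD has caught up does the log move on to "waiting for clean", which repeatedly dumps the PG map as JSON; a sketch of that check follows the dump below.

2026-03-10T05:47:11.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:10 vm02 bash[17462]: audit 2026-03-10T05:47:09.910221+0000 mon.a (mon.0) 608 : audit [DBG] from='client.? 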
192.168.123.102:0/843784099' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T05:47:11.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:10 vm02 bash[17462]: audit 2026-03-10T05:47:09.979283+0000 mon.c (mon.1) 54 : audit [DBG] from='client.? 192.168.123.102:0/87462172' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T05:47:11.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:10 vm02 bash[17462]: audit 2026-03-10T05:47:09.998887+0000 mon.c (mon.1) 55 : audit [DBG] from='client.? 192.168.123.102:0/2044179138' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T05:47:11.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:10 vm02 bash[17462]: audit 2026-03-10T05:47:10.034284+0000 mon.b (mon.2) 24 : audit [DBG] from='client.? 192.168.123.102:0/2624748021' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T05:47:11.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:10 vm02 bash[22526]: cluster 2026-03-10T05:47:09.682024+0000 mgr.y (mgr.14409) 29 : cluster [DBG] pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:11.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:10 vm02 bash[22526]: audit 2026-03-10T05:47:09.910221+0000 mon.a (mon.0) 608 : audit [DBG] from='client.? 192.168.123.102:0/843784099' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T05:47:11.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:10 vm02 bash[22526]: audit 2026-03-10T05:47:09.979283+0000 mon.c (mon.1) 54 : audit [DBG] from='client.? 192.168.123.102:0/87462172' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T05:47:11.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:10 vm02 bash[22526]: audit 2026-03-10T05:47:09.998887+0000 mon.c (mon.1) 55 : audit [DBG] from='client.? 192.168.123.102:0/2044179138' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T05:47:11.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:10 vm02 bash[22526]: audit 2026-03-10T05:47:10.034284+0000 mon.b (mon.2) 24 : audit [DBG] from='client.? 192.168.123.102:0/2624748021' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T05:47:11.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:10 vm05 bash[17864]: cluster 2026-03-10T05:47:09.682024+0000 mgr.y (mgr.14409) 29 : cluster [DBG] pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:11.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:10 vm05 bash[17864]: audit 2026-03-10T05:47:09.910221+0000 mon.a (mon.0) 608 : audit [DBG] from='client.? 192.168.123.102:0/843784099' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T05:47:11.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:10 vm05 bash[17864]: audit 2026-03-10T05:47:09.979283+0000 mon.c (mon.1) 54 : audit [DBG] from='client.? 192.168.123.102:0/87462172' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T05:47:11.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:10 vm05 bash[17864]: audit 2026-03-10T05:47:09.998887+0000 mon.c (mon.1) 55 : audit [DBG] from='client.? 
192.168.123.102:0/2044179138' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T05:47:11.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:10 vm05 bash[17864]: audit 2026-03-10T05:47:10.034284+0000 mon.b (mon.2) 24 : audit [DBG] from='client.? 192.168.123.102:0/2624748021' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T05:47:12.722 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/mon.c/config 2026-03-10T05:47:13.030 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T05:47:13.032 INFO:teuthology.orchestra.run.vm02.stderr:dumped all 2026-03-10T05:47:13.039 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:12 vm02 bash[22526]: cluster 2026-03-10T05:47:11.682247+0000 mgr.y (mgr.14409) 30 : cluster [DBG] pgmap v13: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:13.041 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:12 vm02 bash[17462]: cluster 2026-03-10T05:47:11.682247+0000 mgr.y (mgr.14409) 30 : cluster [DBG] pgmap v13: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:13.079 INFO:teuthology.orchestra.run.vm02.stdout:{"pg_ready":true,"pg_map":{"version":13,"stamp":"2026-03-10T05:47:11.682151+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":192,"num_read_kb":288,"num_write":133,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":397840,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":87,"ondisk_log_size":87,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":8,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":167739392,"kb_used":49620,"kb_used_data":4884,"kb_used_omap":0,"kb_used_meta":44672,"kb_avail":167689772,"statfs":{"total":171765137408,"available":171714326528,"internally_reserved":0,"allocated":5001216,"data_stored":2736309,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":45744128},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misp
laced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.001444"},"pg_stats":[{"pgid":"1.0","version":"49'87","reported_seq":56,"reported_epoch":49,"state":"active+clean","last_fresh":"2026-03-10T05:46:52.550214+0000","last_change":"2026-03-10T05:46:48.372680+0000","last_active":"2026-03-10T05:46:52.550214+0000","last_peered":"2026-03-10T05:46:52.550214+0000","last_clean":"2026-03-10T05:46:52.550214+0000","last_became_active":"2026-03-10T05:46:42.634851+0000","last_became_peered":"2026-03-10T05:46:42.634851+0000","last_unstale":"2026-03-10T05:46:52.550214+0000","last_undegraded":"2026-03-10T05:46:52.550214+0000","last_fullsized":"2026-03-10T05:46:52.550214+0000","mapping_epoch":47,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":48,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T05:45:26.540635+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T05:45:26.540635+0000","last_clean_scrub_stamp":"2026-03-10T05:45:26.540635+0000","objects_scrubbed":0,"log_size":87,"ondisk_log_size":87,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T08:20:11.900816+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":192,"num_read_kb":288,"num_write":133,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":397840,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":192,"num_read_kb":288,"num_write":133,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":397840,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1204224,"data_stored":1193520,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":87,"ondisk_log_size":87,"up":3,"acting":3,"num_store_stats":4}],"osd_stats":[{"osd":7,"up_from":46,"seq":197568495623,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":6176,"kb_used_data":856,"kb_used_omap":0,"kb_used_meta":5312,"kb_avail":20961248,"statfs":{"total":21470642176,"available":21464317952,"internally_reserved":0,"allocated":876544,"data_stored":590728,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5439488},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.57099999999999995}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.55400000000000005}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.72199999999999998}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.79300000000000004}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.499}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.73499999999999999}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.50900000000000001}]}]},{"osd":6,"up_from":40,"seq":171798691850,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":6172,"kb_used_data":852,"kb_used_omap":0,"kb_used_meta":5312,"kb_avail":20961252,"statfs":{"total":21470642176,"available":21464322048,"internally_reserved":0,"allocated":872448,"data_stored":590413,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5439488},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.753}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.97999999999999998}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.89200000000000002}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.66900000000000004}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.47899999999999998}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.39200000000000002}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.60199999999999998}]}]},{"osd":1,"up_from":12,"seq":51539607577,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":6424,"kb_used_data":464,"kb_used_omap":0,"kb_used_meta":5952,"kb_avail":20961000,"statfs":{"total":21470642176,"available":21464064000,"internally_reserved":0,"allocated":475136,"data_stored":192888,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":6094848},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Tue Mar 10 05:46:10 2026","interfaces":[{"interface":"back","average":{"1min":0.46400000000000002,"5min":0.46400000000000002,"15min":0.46400000000000002},"min":{"1min":0.20899999999999999,"5min":0.20899999999999999,"15min":0.20899999999999999},"max":{"1min":1.8759999999999999,"5min":1.8759999999999999,"15min":1.8759999999999999},"last":0.69199999999999995},{"interface":"front","average":{"1min":0.47599999999999998,"5min":0.47599999999999998,"15min":0.47599999999999998},"min":{"1min":0.221,"5min":0.221,"15min":0.221},"max":{"1min":1.891,"5min":1.891,"15min":1.891},"last":0.77600000000000002}]},{"osd":2,"last update":"Tue Mar 10 05:46:30 2026","interfaces":[{"interface":"back","average":{"1min":0.47399999999999998,"5min":0.47399999999999998,"15min":0.47399999999999998},"min":{"1min":0.28999999999999998,"5min":0.28999999999999998,"15min":0.28999999999999998},"max":{"1min":0.71299999999999997,"5min":0.71299999999999997,"15min":0.71299999999999997},"last":0.79800000000000004},{"interface":"front","average":{"1min":0.46600000000000003,"5min":0.46600000000000003,"15min":0.46600000000000003},"min":{"1min":0.224,"5min":0.224,"15min":0.224},"max":{"1min":1.1040000000000001,"5min":1.1040000000000001,"15min":1.1040000000000001},"last":0.879}]},{"osd":3,"last update":"Tue Mar 10 05:46:42 2026","interfaces":[{"interface":"back","average":{"1min":0.54500000000000004,"5min":0.54500000000000004,"15min":0.54500000000000004},"min":{"1min":0.35699999999999998,"5min":0.35699999999999998,"15min":0.35699999999999998},"max":{"1min":0.86499999999999999,"5min":0.86499999999999999,"15min":0.86499999999999999},"last":0.81899999999999995},{"interface":"front","average":{"1min":0.55500000000000005,"5min":0.55500000000000005,"15min":0.55500000000000005},"min":{"1min":0.27700000000000002,"5min":0.27700000000000002,"15min":0.27700000000000002},"max":{"1min":1.0589999999999999,"5min":1.0589999999999999,"15min":1.0589999999999999},"last":0.72099999999999997}]},{"osd":4,"last update":"Tue Mar 10 05:46:56 
2026","interfaces":[{"interface":"back","average":{"1min":0.54300000000000004,"5min":0.54300000000000004,"15min":0.54300000000000004},"min":{"1min":0.38900000000000001,"5min":0.38900000000000001,"15min":0.38900000000000001},"max":{"1min":0.83899999999999997,"5min":0.83899999999999997,"15min":0.83899999999999997},"last":0.88900000000000001},{"interface":"front","average":{"1min":0.58099999999999996,"5min":0.58099999999999996,"15min":0.58099999999999996},"min":{"1min":0.38100000000000001,"5min":0.38100000000000001,"15min":0.38100000000000001},"max":{"1min":0.82799999999999996,"5min":0.82799999999999996,"15min":0.82799999999999996},"last":0.76900000000000002}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.70499999999999996}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.76000000000000001}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.85599999999999998}]}]},{"osd":0,"up_from":8,"seq":34359738396,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":6880,"kb_used_data":856,"kb_used_omap":0,"kb_used_meta":6016,"kb_avail":20960544,"statfs":{"total":21470642176,"available":21463597056,"internally_reserved":0,"allocated":876544,"data_stored":590728,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":6160384},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":1,"last update":"Tue Mar 10 05:46:15 2026","interfaces":[{"interface":"back","average":{"1min":0.34699999999999998,"5min":0.34699999999999998,"15min":0.34699999999999998},"min":{"1min":0.21299999999999999,"5min":0.21299999999999999,"15min":0.21299999999999999},"max":{"1min":0.63700000000000001,"5min":0.63700000000000001,"15min":0.63700000000000001},"last":4.1929999999999996},{"interface":"front","average":{"1min":0.379,"5min":0.379,"15min":0.379},"min":{"1min":0.191,"5min":0.191,"15min":0.191},"max":{"1min":0.58199999999999996,"5min":0.58199999999999996,"15min":0.58199999999999996},"last":4.4320000000000004}]},{"osd":2,"last update":"Tue Mar 10 05:46:29 2026","interfaces":[{"interface":"back","average":{"1min":0.438,"5min":0.438,"15min":0.438},"min":{"1min":0.18099999999999999,"5min":0.18099999999999999,"15min":0.18099999999999999},"max":{"1min":0.72999999999999998,"5min":0.72999999999999998,"15min":0.72999999999999998},"last":4.4100000000000001},{"interface":"front","average":{"1min":0.41199999999999998,"5min":0.41199999999999998,"15min":0.41199999999999998},"min":{"1min":0.20000000000000001,"5min":0.20000000000000001,"15min":0.20000000000000001},"max":{"1min":0.68600000000000005,"5min":0.68600000000000005,"15min":0.68600000000000005},"last":4.1740000000000004}]},{"osd":3,"last update":"Tue Mar 10 05:46:41 
2026","interfaces":[{"interface":"back","average":{"1min":0.49399999999999999,"5min":0.49399999999999999,"15min":0.49399999999999999},"min":{"1min":0.161,"5min":0.161,"15min":0.161},"max":{"1min":0.69499999999999995,"5min":0.69499999999999995,"15min":0.69499999999999995},"last":4.2240000000000002},{"interface":"front","average":{"1min":0.51600000000000001,"5min":0.51600000000000001,"15min":0.51600000000000001},"min":{"1min":0.23100000000000001,"5min":0.23100000000000001,"15min":0.23100000000000001},"max":{"1min":0.84899999999999998,"5min":0.84899999999999998,"15min":0.84899999999999998},"last":3.8380000000000001}]},{"osd":4,"last update":"Tue Mar 10 05:46:56 2026","interfaces":[{"interface":"back","average":{"1min":0.53700000000000003,"5min":0.53700000000000003,"15min":0.53700000000000003},"min":{"1min":0.376,"5min":0.376,"15min":0.376},"max":{"1min":0.80100000000000005,"5min":0.80100000000000005,"15min":0.80100000000000005},"last":4.4409999999999998},{"interface":"front","average":{"1min":0.51300000000000001,"5min":0.51300000000000001,"15min":0.51300000000000001},"min":{"1min":0.24399999999999999,"5min":0.24399999999999999,"15min":0.24399999999999999},"max":{"1min":0.73699999999999999,"5min":0.73699999999999999,"15min":0.73699999999999999},"last":4.2000000000000002}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":4.2140000000000004}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":3.887}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":4.1840000000000002}]}]},{"osd":2,"up_from":17,"seq":73014444054,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":6360,"kb_used_data":464,"kb_used_omap":0,"kb_used_meta":5888,"kb_avail":20961064,"statfs":{"total":21470642176,"available":21464129536,"internally_reserved":0,"allocated":475136,"data_stored":192888,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":6029312},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Tue Mar 10 05:46:27 2026","interfaces":[{"interface":"back","average":{"1min":0.5,"5min":0.5,"15min":0.5},"min":{"1min":0.29399999999999998,"5min":0.29399999999999998,"15min":0.29399999999999998},"max":{"1min":1.355,"5min":1.355,"15min":1.355},"last":0.35899999999999999},{"interface":"front","average":{"1min":0.45600000000000002,"5min":0.45600000000000002,"15min":0.45600000000000002},"min":{"1min":0.159,"5min":0.159,"15min":0.159},"max":{"1min":1.4299999999999999,"5min":1.4299999999999999,"15min":1.4299999999999999},"last":0.34200000000000003}]},{"osd":1,"last update":"Tue Mar 10 05:46:27 
2026","interfaces":[{"interface":"back","average":{"1min":0.46700000000000003,"5min":0.46700000000000003,"15min":0.46700000000000003},"min":{"1min":0.25900000000000001,"5min":0.25900000000000001,"15min":0.25900000000000001},"max":{"1min":1.214,"5min":1.214,"15min":1.214},"last":0.55700000000000005},{"interface":"front","average":{"1min":0.51100000000000001,"5min":0.51100000000000001,"15min":0.51100000000000001},"min":{"1min":0.28799999999999998,"5min":0.28799999999999998,"15min":0.28799999999999998},"max":{"1min":1.0389999999999999,"5min":1.0389999999999999,"15min":1.0389999999999999},"last":0.60099999999999998}]},{"osd":3,"last update":"Tue Mar 10 05:46:44 2026","interfaces":[{"interface":"back","average":{"1min":0.52500000000000002,"5min":0.52500000000000002,"15min":0.52500000000000002},"min":{"1min":0.316,"5min":0.316,"15min":0.316},"max":{"1min":0.83299999999999996,"5min":0.83299999999999996,"15min":0.83299999999999996},"last":0.60999999999999999},{"interface":"front","average":{"1min":0.56399999999999995,"5min":0.56399999999999995,"15min":0.56399999999999995},"min":{"1min":0.312,"5min":0.312,"15min":0.312},"max":{"1min":0.94599999999999995,"5min":0.94599999999999995,"15min":0.94599999999999995},"last":0.57999999999999996}]},{"osd":4,"last update":"Tue Mar 10 05:46:56 2026","interfaces":[{"interface":"back","average":{"1min":0.53400000000000003,"5min":0.53400000000000003,"15min":0.53400000000000003},"min":{"1min":0.42099999999999999,"5min":0.42099999999999999,"15min":0.42099999999999999},"max":{"1min":0.86799999999999999,"5min":0.86799999999999999,"15min":0.86799999999999999},"last":0.71199999999999997},{"interface":"front","average":{"1min":0.56399999999999995,"5min":0.56399999999999995,"15min":0.56399999999999995},"min":{"1min":0.35299999999999998,"5min":0.35299999999999998,"15min":0.35299999999999998},"max":{"1min":0.85299999999999998,"5min":0.85299999999999998,"15min":0.85299999999999998},"last":0.70099999999999996}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.57099999999999995}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.63700000000000001}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.68700000000000006}]}]},{"osd":3,"up_from":23,"seq":98784247827,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":5848,"kb_used_data":464,"kb_used_omap":0,"kb_used_meta":5376,"kb_avail":20961576,"statfs":{"total":21470642176,"available":21464653824,"internally_reserved":0,"allocated":475136,"data_stored":192888,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5505024},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Tue Mar 10 05:46:41 
2026","interfaces":[{"interface":"back","average":{"1min":0.52900000000000003,"5min":0.52900000000000003,"15min":0.52900000000000003},"min":{"1min":0.315,"5min":0.315,"15min":0.315},"max":{"1min":1.0309999999999999,"5min":1.0309999999999999,"15min":1.0309999999999999},"last":0.57799999999999996},{"interface":"front","average":{"1min":0.52800000000000002,"5min":0.52800000000000002,"15min":0.52800000000000002},"min":{"1min":0.33500000000000002,"5min":0.33500000000000002,"15min":0.33500000000000002},"max":{"1min":1.0169999999999999,"5min":1.0169999999999999,"15min":1.0169999999999999},"last":0.60899999999999999}]},{"osd":1,"last update":"Tue Mar 10 05:46:41 2026","interfaces":[{"interface":"back","average":{"1min":0.52600000000000002,"5min":0.52600000000000002,"15min":0.52600000000000002},"min":{"1min":0.32000000000000001,"5min":0.32000000000000001,"15min":0.32000000000000001},"max":{"1min":0.77100000000000002,"5min":0.77100000000000002,"15min":0.77100000000000002},"last":0.63},{"interface":"front","average":{"1min":0.54900000000000004,"5min":0.54900000000000004,"15min":0.54900000000000004},"min":{"1min":0.35999999999999999,"5min":0.35999999999999999,"15min":0.35999999999999999},"max":{"1min":0.88600000000000001,"5min":0.88600000000000001,"15min":0.88600000000000001},"last":0.53200000000000003}]},{"osd":2,"last update":"Tue Mar 10 05:46:41 2026","interfaces":[{"interface":"back","average":{"1min":0.54800000000000004,"5min":0.54800000000000004,"15min":0.54800000000000004},"min":{"1min":0.33400000000000002,"5min":0.33400000000000002,"15min":0.33400000000000002},"max":{"1min":0.99299999999999999,"5min":0.99299999999999999,"15min":0.99299999999999999},"last":0.52000000000000002},{"interface":"front","average":{"1min":0.54800000000000004,"5min":0.54800000000000004,"15min":0.54800000000000004},"min":{"1min":0.23799999999999999,"5min":0.23799999999999999,"15min":0.23799999999999999},"max":{"1min":1.0880000000000001,"5min":1.0880000000000001,"15min":1.0880000000000001},"last":0.54900000000000004}]},{"osd":4,"last update":"Tue Mar 10 05:46:58 2026","interfaces":[{"interface":"back","average":{"1min":0.621,"5min":0.621,"15min":0.621},"min":{"1min":0.434,"5min":0.434,"15min":0.434},"max":{"1min":1.0269999999999999,"5min":1.0269999999999999,"15min":1.0269999999999999},"last":0.59799999999999998},{"interface":"front","average":{"1min":0.60199999999999998,"5min":0.60199999999999998,"15min":0.60199999999999998},"min":{"1min":0.46000000000000002,"5min":0.46000000000000002,"15min":0.46000000000000002},"max":{"1min":0.92800000000000005,"5min":0.92800000000000005,"15min":0.92800000000000005},"last":0.85999999999999999}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.67600000000000005}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.78900000000000003}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.62}]}]},{"osd":4,"up_from":28,"seq":120259084304,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":5912,"kb_used_data":464,"kb_used_omap":0,"kb_used_meta":5440,"kb_avail":20961512,"statfs":{"total":21470642176,"available":21464588288,"internally_reserved":0,"allocated":475136,"data_stored":192888,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5570560},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Tue Mar 10 05:46:54 2026","interfaces":[{"interface":"back","average":{"1min":0.52000000000000002,"5min":0.52000000000000002,"15min":0.52000000000000002},"min":{"1min":0.31900000000000001,"5min":0.31900000000000001,"15min":0.31900000000000001},"max":{"1min":0.98499999999999999,"5min":0.98499999999999999,"15min":0.98499999999999999},"last":0.55400000000000005},{"interface":"front","average":{"1min":0.54800000000000004,"5min":0.54800000000000004,"15min":0.54800000000000004},"min":{"1min":0.312,"5min":0.312,"15min":0.312},"max":{"1min":1.3169999999999999,"5min":1.3169999999999999,"15min":1.3169999999999999},"last":0.47699999999999998}]},{"osd":1,"last update":"Tue Mar 10 05:46:54 2026","interfaces":[{"interface":"back","average":{"1min":0.58699999999999997,"5min":0.58699999999999997,"15min":0.58699999999999997},"min":{"1min":0.34000000000000002,"5min":0.34000000000000002,"15min":0.34000000000000002},"max":{"1min":0.97299999999999998,"5min":0.97299999999999998,"15min":0.97299999999999998},"last":0.442},{"interface":"front","average":{"1min":0.55500000000000005,"5min":0.55500000000000005,"15min":0.55500000000000005},"min":{"1min":0.29899999999999999,"5min":0.29899999999999999,"15min":0.29899999999999999},"max":{"1min":0.999,"5min":0.999,"15min":0.999},"last":0.54500000000000004}]},{"osd":2,"last update":"Tue Mar 10 05:46:54 2026","interfaces":[{"interface":"back","average":{"1min":0.59099999999999997,"5min":0.59099999999999997,"15min":0.59099999999999997},"min":{"1min":0.32100000000000001,"5min":0.32100000000000001,"15min":0.32100000000000001},"max":{"1min":0.99199999999999999,"5min":0.99199999999999999,"15min":0.99199999999999999},"last":0.56100000000000005},{"interface":"front","average":{"1min":0.56399999999999995,"5min":0.56399999999999995,"15min":0.56399999999999995},"min":{"1min":0.23999999999999999,"5min":0.23999999999999999,"15min":0.23999999999999999},"max":{"1min":0.79200000000000004,"5min":0.79200000000000004,"15min":0.79200000000000004},"last":0.57099999999999995}]},{"osd":3,"last update":"Tue Mar 10 05:46:54 
2026","interfaces":[{"interface":"back","average":{"1min":0.59299999999999997,"5min":0.59299999999999997,"15min":0.59299999999999997},"min":{"1min":0.41299999999999998,"5min":0.41299999999999998,"15min":0.41299999999999998},"max":{"1min":0.79900000000000004,"5min":0.79900000000000004,"15min":0.79900000000000004},"last":0.63300000000000001},{"interface":"front","average":{"1min":0.61299999999999999,"5min":0.61299999999999999,"15min":0.61299999999999999},"min":{"1min":0.34699999999999998,"5min":0.34699999999999998,"15min":0.34699999999999998},"max":{"1min":0.94399999999999995,"5min":0.94399999999999995,"15min":0.94399999999999995},"last":0.61399999999999999}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.57699999999999996}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.52100000000000002}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.45400000000000001}]}]},{"osd":5,"up_from":34,"seq":146028888077,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":5848,"kb_used_data":464,"kb_used_omap":0,"kb_used_meta":5376,"kb_avail":20961576,"statfs":{"total":21470642176,"available":21464653824,"internally_reserved":0,"allocated":475136,"data_stored":192888,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5505024},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.78700000000000003}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.54200000000000004}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.73899999999999999}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.70299999999999996}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.49199999999999999}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.68200000000000005}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.52700000000000002}]}]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T05:47:13.080 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph pg dump --format=json 2026-03-10T05:47:13.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:12 vm05 bash[17864]: cluster 2026-03-10T05:47:11.682247+0000 mgr.y (mgr.14409) 30 : cluster [DBG] pgmap v13: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:13.258 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:47:12 vm05 bash[18520]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:47:12] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T05:47:14.733 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/mon.c/config 2026-03-10T05:47:15.072 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T05:47:15.074 INFO:teuthology.orchestra.run.vm02.stderr:dumped all 2026-03-10T05:47:15.082 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:14 vm02 bash[17462]: audit 2026-03-10T05:47:13.029564+0000 mgr.y (mgr.14409) 31 : audit [DBG] from='client.24427 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:47:15.082 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:14 vm02 bash[17462]: cluster 2026-03-10T05:47:13.682556+0000 mgr.y (mgr.14409) 32 : cluster [DBG] pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:15.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:14 vm02 bash[22526]: audit 2026-03-10T05:47:13.029564+0000 mgr.y (mgr.14409) 31 : audit [DBG] from='client.24427 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:47:15.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:14 vm02 bash[22526]: cluster 2026-03-10T05:47:13.682556+0000 mgr.y (mgr.14409) 32 : cluster [DBG] pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:15.123 
INFO:teuthology.orchestra.run.vm02.stdout:{"pg_ready":true,"pg_map":{"version":14,"stamp":"2026-03-10T05:47:13.682392+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":192,"num_read_kb":288,"num_write":133,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":397840,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":87,"ondisk_log_size":87,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":8,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":167739392,"kb_used":49620,"kb_used_data":4884,"kb_used_omap":0,"kb_used_meta":44672,"kb_avail":167689772,"statfs":{"total":171765137408,"available":171714326528,"internally_reserved":0,"allocated":5001216,"data_stored":2736309,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":45744128},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.001414"},"pg_stats":[{"pgid":"1.0","version":"49'87","reported_seq":56,"reported_epoch":49,"state":"active+clean","last_fresh":"2026-03-10T05:46:52.550214+0000","last_change":"2026-03-10T05:46:48.372680+0000","last_active":"2026-
03-10T05:46:52.550214+0000","last_peered":"2026-03-10T05:46:52.550214+0000","last_clean":"2026-03-10T05:46:52.550214+0000","last_became_active":"2026-03-10T05:46:42.634851+0000","last_became_peered":"2026-03-10T05:46:42.634851+0000","last_unstale":"2026-03-10T05:46:52.550214+0000","last_undegraded":"2026-03-10T05:46:52.550214+0000","last_fullsized":"2026-03-10T05:46:52.550214+0000","mapping_epoch":47,"log_start":"0'0","ondisk_log_start":"0'0","created":18,"last_epoch_clean":48,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T05:45:26.540635+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T05:45:26.540635+0000","last_clean_scrub_stamp":"2026-03-10T05:45:26.540635+0000","objects_scrubbed":0,"log_size":87,"ondisk_log_size":87,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T08:20:11.900816+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":192,"num_read_kb":288,"num_write":133,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":397840,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":192,"num_read_kb":288,"num_write":133,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":397840,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1204224,"data_stored":1193520,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":87,"ondisk_log_size":87,"up":3,"acting":3,"num_store_stats":4}],"osd_stats":[{"osd":7,"up_from":46,"seq":197568495624,"num_pgs":1,"num_osds":1,"num_per_
pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":6176,"kb_used_data":856,"kb_used_omap":0,"kb_used_meta":5312,"kb_avail":20961248,"statfs":{"total":21470642176,"available":21464317952,"internally_reserved":0,"allocated":876544,"data_stored":590728,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5439488},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.52900000000000003}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.47799999999999998}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.443}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.54900000000000004}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.63600000000000001}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.46999999999999997}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.73799999999999999}]}]},{"osd":6,"up_from":40,"seq":171798691851,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":6172,"kb_used_data":852,"kb_used_omap":0,"kb_used_meta":5312,"kb_avail":20961252,"statfs":{"total":21470642176,"available":21464322048,"internally_reserved":0,"allocated":872448,"data_stored":590413,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5439488},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.57399999999999995}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.85999999999999999}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.66500000000000004}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.77700000000000002}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.69399999999999995}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.67700000000000005}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.68700000000000006}]}]},{"osd":1,"up_from":12,"seq":51539607578,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":6424,"kb_used_data":464,"kb_used_omap":0,"kb_used_meta":5952,"kb_avail":20961000,"statfs":{"total":21470642176,"available":21464064000,"internally_reserved":0,"allocated":475136,"data_stored":192888,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":6094848},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Tue Mar 10 05:47:11 2026","interfaces":[{"interface":"back","average":{"1min":0.73899999999999999,"5min":0.51900000000000002,"15min":0.48199999999999998},"min":{"1min":0.28100000000000003,"5min":0.20899999999999999,"15min":0.20899999999999999},"max":{"1min":4.5209999999999999,"5min":4.5209999999999999,"15min":4.5209999999999999},"last":0.30199999999999999},{"interface":"front","average":{"1min":0.77700000000000002,"5min":0.53600000000000003,"15min":0.496},"min":{"1min":0.29299999999999998,"5min":0.221,"15min":0.221},"max":{"1min":4.4139999999999997,"5min":4.4139999999999997,"15min":4.4139999999999997},"last":0.76400000000000001}]},{"osd":2,"last update":"Tue Mar 10 05:46:30 2026","interfaces":[{"interface":"back","average":{"1min":0.47399999999999998,"5min":0.47399999999999998,"15min":0.47399999999999998},"min":{"1min":0.28999999999999998,"5min":0.28999999999999998,"15min":0.28999999999999998},"max":{"1min":0.71299999999999997,"5min":0.71299999999999997,"15min":0.71299999999999997},"last":0.60599999999999998},{"interface":"front","average":{"1min":0.46600000000000003,"5min":0.46600000000000003,"15min":0.46600000000000003},"min":{"1min":0.224,"5min":0.224,"15min":0.224},"max":{"1min":1.1040000000000001,"5min":1.1040000000000001,"15min":1.1040000000000001},"last":0.71599999999999997}]},{"osd":3,"last update":"Tue Mar 10 05:46:42 
2026","interfaces":[{"interface":"back","average":{"1min":0.54500000000000004,"5min":0.54500000000000004,"15min":0.54500000000000004},"min":{"1min":0.35699999999999998,"5min":0.35699999999999998,"15min":0.35699999999999998},"max":{"1min":0.86499999999999999,"5min":0.86499999999999999,"15min":0.86499999999999999},"last":0.69899999999999995},{"interface":"front","average":{"1min":0.55500000000000005,"5min":0.55500000000000005,"15min":0.55500000000000005},"min":{"1min":0.27700000000000002,"5min":0.27700000000000002,"15min":0.27700000000000002},"max":{"1min":1.0589999999999999,"5min":1.0589999999999999,"15min":1.0589999999999999},"last":0.36599999999999999}]},{"osd":4,"last update":"Tue Mar 10 05:46:56 2026","interfaces":[{"interface":"back","average":{"1min":0.54300000000000004,"5min":0.54300000000000004,"15min":0.54300000000000004},"min":{"1min":0.38900000000000001,"5min":0.38900000000000001,"15min":0.38900000000000001},"max":{"1min":0.83899999999999997,"5min":0.83899999999999997,"15min":0.83899999999999997},"last":0.77500000000000002},{"interface":"front","average":{"1min":0.58099999999999996,"5min":0.58099999999999996,"15min":0.58099999999999996},"min":{"1min":0.38100000000000001,"5min":0.38100000000000001,"15min":0.38100000000000001},"max":{"1min":0.82799999999999996,"5min":0.82799999999999996,"15min":0.82799999999999996},"last":0.72599999999999998}]},{"osd":5,"last update":"Tue Mar 10 05:47:11 2026","interfaces":[{"interface":"back","average":{"1min":0.81799999999999995,"5min":0.81799999999999995,"15min":0.81799999999999995},"min":{"1min":0.33900000000000002,"5min":0.33900000000000002,"15min":0.33900000000000002},"max":{"1min":4.5039999999999996,"5min":4.5039999999999996,"15min":4.5039999999999996},"last":0.51700000000000002},{"interface":"front","average":{"1min":0.629,"5min":0.629,"15min":0.629},"min":{"1min":0.40200000000000002,"5min":0.40200000000000002,"15min":0.40200000000000002},"max":{"1min":0.94399999999999995,"5min":0.94399999999999995,"15min":0.94399999999999995},"last":0.75}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.66400000000000003}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.79500000000000004}]}]},{"osd":0,"up_from":8,"seq":34359738397,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":6880,"kb_used_data":856,"kb_used_omap":0,"kb_used_meta":6016,"kb_avail":20960544,"statfs":{"total":21470642176,"available":21463597056,"internally_reserved":0,"allocated":876544,"data_stored":590728,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":6160384},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":1,"last update":"Tue Mar 10 05:46:15 
2026","interfaces":[{"interface":"back","average":{"1min":0.34699999999999998,"5min":0.34699999999999998,"15min":0.34699999999999998},"min":{"1min":0.21299999999999999,"5min":0.21299999999999999,"15min":0.21299999999999999},"max":{"1min":0.63700000000000001,"5min":0.63700000000000001,"15min":0.63700000000000001},"last":1.6799999999999999},{"interface":"front","average":{"1min":0.379,"5min":0.379,"15min":0.379},"min":{"1min":0.191,"5min":0.191,"15min":0.191},"max":{"1min":0.58199999999999996,"5min":0.58199999999999996,"15min":0.58199999999999996},"last":1.893}]},{"osd":2,"last update":"Tue Mar 10 05:46:29 2026","interfaces":[{"interface":"back","average":{"1min":0.438,"5min":0.438,"15min":0.438},"min":{"1min":0.18099999999999999,"5min":0.18099999999999999,"15min":0.18099999999999999},"max":{"1min":0.72999999999999998,"5min":0.72999999999999998,"15min":0.72999999999999998},"last":1.9239999999999999},{"interface":"front","average":{"1min":0.41199999999999998,"5min":0.41199999999999998,"15min":0.41199999999999998},"min":{"1min":0.20000000000000001,"5min":0.20000000000000001,"15min":0.20000000000000001},"max":{"1min":0.68600000000000005,"5min":0.68600000000000005,"15min":0.68600000000000005},"last":1.877}]},{"osd":3,"last update":"Tue Mar 10 05:46:41 2026","interfaces":[{"interface":"back","average":{"1min":0.49399999999999999,"5min":0.49399999999999999,"15min":0.49399999999999999},"min":{"1min":0.161,"5min":0.161,"15min":0.161},"max":{"1min":0.69499999999999995,"5min":0.69499999999999995,"15min":0.69499999999999995},"last":1.847},{"interface":"front","average":{"1min":0.51600000000000001,"5min":0.51600000000000001,"15min":0.51600000000000001},"min":{"1min":0.23100000000000001,"5min":0.23100000000000001,"15min":0.23100000000000001},"max":{"1min":0.84899999999999998,"5min":0.84899999999999998,"15min":0.84899999999999998},"last":1.8320000000000001}]},{"osd":4,"last update":"Tue Mar 10 05:46:56 2026","interfaces":[{"interface":"back","average":{"1min":0.53700000000000003,"5min":0.53700000000000003,"15min":0.53700000000000003},"min":{"1min":0.376,"5min":0.376,"15min":0.376},"max":{"1min":0.80100000000000005,"5min":0.80100000000000005,"15min":0.80100000000000005},"last":1.905},{"interface":"front","average":{"1min":0.51300000000000001,"5min":0.51300000000000001,"15min":0.51300000000000001},"min":{"1min":0.24399999999999999,"5min":0.24399999999999999,"15min":0.24399999999999999},"max":{"1min":0.73699999999999999,"5min":0.73699999999999999,"15min":0.73699999999999999},"last":1.859}]},{"osd":5,"last update":"Tue Mar 10 05:47:08 2026","interfaces":[{"interface":"back","average":{"1min":0.85299999999999998,"5min":0.85299999999999998,"15min":0.85299999999999998},"min":{"1min":0.252,"5min":0.252,"15min":0.252},"max":{"1min":4.2140000000000004,"5min":4.2140000000000004,"15min":4.2140000000000004},"last":1.8680000000000001},{"interface":"front","average":{"1min":0.92600000000000005,"5min":0.92600000000000005,"15min":0.92600000000000005},"min":{"1min":0.29099999999999998,"5min":0.29099999999999998,"15min":0.29099999999999998},"max":{"1min":4.4489999999999998,"5min":4.4489999999999998,"15min":4.4489999999999998},"last":1.9139999999999999}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":1.6890000000000001}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":1.7070000000000001}]}]},{"osd":2,"up_from":17,"seq":73014444055,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":6360,"kb_used_data":464,"kb_used_omap":0,"kb_used_meta":5888,"kb_avail":20961064,"statfs":{"total":21470642176,"available":21464129536,"internally_reserved":0,"allocated":475136,"data_stored":192888,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":6029312},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Tue Mar 10 05:46:27 2026","interfaces":[{"interface":"back","average":{"1min":0.5,"5min":0.5,"15min":0.5},"min":{"1min":0.29399999999999998,"5min":0.29399999999999998,"15min":0.29399999999999998},"max":{"1min":1.355,"5min":1.355,"15min":1.355},"last":0.503},{"interface":"front","average":{"1min":0.45600000000000002,"5min":0.45600000000000002,"15min":0.45600000000000002},"min":{"1min":0.159,"5min":0.159,"15min":0.159},"max":{"1min":1.4299999999999999,"5min":1.4299999999999999,"15min":1.4299999999999999},"last":0.60899999999999999}]},{"osd":1,"last update":"Tue Mar 10 05:46:27 2026","interfaces":[{"interface":"back","average":{"1min":0.46700000000000003,"5min":0.46700000000000003,"15min":0.46700000000000003},"min":{"1min":0.25900000000000001,"5min":0.25900000000000001,"15min":0.25900000000000001},"max":{"1min":1.214,"5min":1.214,"15min":1.214},"last":0.248},{"interface":"front","average":{"1min":0.51100000000000001,"5min":0.51100000000000001,"15min":0.51100000000000001},"min":{"1min":0.28799999999999998,"5min":0.28799999999999998,"15min":0.28799999999999998},"max":{"1min":1.0389999999999999,"5min":1.0389999999999999,"15min":1.0389999999999999},"last":0.48299999999999998}]},{"osd":3,"last update":"Tue Mar 10 05:46:44 2026","interfaces":[{"interface":"back","average":{"1min":0.52500000000000002,"5min":0.52500000000000002,"15min":0.52500000000000002},"min":{"1min":0.316,"5min":0.316,"15min":0.316},"max":{"1min":0.83299999999999996,"5min":0.83299999999999996,"15min":0.83299999999999996},"last":0.68500000000000005},{"interface":"front","average":{"1min":0.56399999999999995,"5min":0.56399999999999995,"15min":0.56399999999999995},"min":{"1min":0.312,"5min":0.312,"15min":0.312},"max":{"1min":0.94599999999999995,"5min":0.94599999999999995,"15min":0.94599999999999995},"last":0.65200000000000002}]},{"osd":4,"last update":"Tue Mar 10 05:46:56 2026","interfaces":[{"interface":"back","average":{"1min":0.53400000000000003,"5min":0.53400000000000003,"15min":0.53400000000000003},"min":{"1min":0.42099999999999999,"5min":0.42099999999999999,"15min":0.42099999999999999},"max":{"1min":0.86799999999999999,"5min":0.86799999999999999,"15min":0.86799999999999999},"last":0.66200000000000003},{"interface":"front","average":{"1min":0.56399999999999995,"5min":0.56399999999999995,"15min":0.56399999999999995},"min":{"1min":0.35299999999999998,"5min":0.35299999999999998,"15min":0.35299999999999998},"max":{"1min":0.85299999999999998,"5min":0.85299999999999998,"15min":0.85299999999999998},"last":0.71299999999999997}]},{"osd":5,"last update":"Tue Mar 10 05:47:11 
2026","interfaces":[{"interface":"back","average":{"1min":0.68999999999999995,"5min":0.68999999999999995,"15min":0.68999999999999995},"min":{"1min":0.497,"5min":0.497,"15min":0.497},"max":{"1min":1.008,"5min":1.008,"15min":1.008},"last":0.628},{"interface":"front","average":{"1min":0.69599999999999995,"5min":0.69599999999999995,"15min":0.69599999999999995},"min":{"1min":0.45200000000000001,"5min":0.45200000000000001,"15min":0.45200000000000001},"max":{"1min":0.89900000000000002,"5min":0.89900000000000002,"15min":0.89900000000000002},"last":0.70399999999999996}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.69599999999999995}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.74399999999999999}]}]},{"osd":3,"up_from":23,"seq":98784247828,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":5848,"kb_used_data":464,"kb_used_omap":0,"kb_used_meta":5376,"kb_avail":20961576,"statfs":{"total":21470642176,"available":21464653824,"internally_reserved":0,"allocated":475136,"data_stored":192888,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5505024},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Tue Mar 10 05:46:41 2026","interfaces":[{"interface":"back","average":{"1min":0.52900000000000003,"5min":0.52900000000000003,"15min":0.52900000000000003},"min":{"1min":0.315,"5min":0.315,"15min":0.315},"max":{"1min":1.0309999999999999,"5min":1.0309999999999999,"15min":1.0309999999999999},"last":0.32800000000000001},{"interface":"front","average":{"1min":0.52800000000000002,"5min":0.52800000000000002,"15min":0.52800000000000002},"min":{"1min":0.33500000000000002,"5min":0.33500000000000002,"15min":0.33500000000000002},"max":{"1min":1.0169999999999999,"5min":1.0169999999999999,"15min":1.0169999999999999},"last":0.64300000000000002}]},{"osd":1,"last update":"Tue Mar 10 05:46:41 2026","interfaces":[{"interface":"back","average":{"1min":0.52600000000000002,"5min":0.52600000000000002,"15min":0.52600000000000002},"min":{"1min":0.32000000000000001,"5min":0.32000000000000001,"15min":0.32000000000000001},"max":{"1min":0.77100000000000002,"5min":0.77100000000000002,"15min":0.77100000000000002},"last":0.68500000000000005},{"interface":"front","average":{"1min":0.54900000000000004,"5min":0.54900000000000004,"15min":0.54900000000000004},"min":{"1min":0.35999999999999999,"5min":0.35999999999999999,"15min":0.35999999999999999},"max":{"1min":0.88600000000000001,"5min":0.88600000000000001,"15min":0.88600000000000001},"last":0.72399999999999998}]},{"osd":2,"last update":"Tue Mar 10 05:46:41 
2026","interfaces":[{"interface":"back","average":{"1min":0.54800000000000004,"5min":0.54800000000000004,"15min":0.54800000000000004},"min":{"1min":0.33400000000000002,"5min":0.33400000000000002,"15min":0.33400000000000002},"max":{"1min":0.99299999999999999,"5min":0.99299999999999999,"15min":0.99299999999999999},"last":0.70799999999999996},{"interface":"front","average":{"1min":0.54800000000000004,"5min":0.54800000000000004,"15min":0.54800000000000004},"min":{"1min":0.23799999999999999,"5min":0.23799999999999999,"15min":0.23799999999999999},"max":{"1min":1.0880000000000001,"5min":1.0880000000000001,"15min":1.0880000000000001},"last":0.54000000000000004}]},{"osd":4,"last update":"Tue Mar 10 05:46:58 2026","interfaces":[{"interface":"back","average":{"1min":0.621,"5min":0.621,"15min":0.621},"min":{"1min":0.434,"5min":0.434,"15min":0.434},"max":{"1min":1.0269999999999999,"5min":1.0269999999999999,"15min":1.0269999999999999},"last":0.73999999999999999},{"interface":"front","average":{"1min":0.60199999999999998,"5min":0.60199999999999998,"15min":0.60199999999999998},"min":{"1min":0.46000000000000002,"5min":0.46000000000000002,"15min":0.46000000000000002},"max":{"1min":0.92800000000000005,"5min":0.92800000000000005,"15min":0.92800000000000005},"last":0.71799999999999997}]},{"osd":5,"last update":"Tue Mar 10 05:47:09 2026","interfaces":[{"interface":"back","average":{"1min":0.63200000000000001,"5min":0.63200000000000001,"15min":0.63200000000000001},"min":{"1min":0.47999999999999998,"5min":0.47999999999999998,"15min":0.47999999999999998},"max":{"1min":1.0529999999999999,"5min":1.0529999999999999,"15min":1.0529999999999999},"last":0.54800000000000004},{"interface":"front","average":{"1min":0.628,"5min":0.628,"15min":0.628},"min":{"1min":0.33700000000000002,"5min":0.33700000000000002,"15min":0.33700000000000002},"max":{"1min":0.81399999999999995,"5min":0.81399999999999995,"15min":0.81399999999999995},"last":0.66700000000000004}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.69799999999999995}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.67900000000000005}]}]},{"osd":4,"up_from":28,"seq":120259084305,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":5912,"kb_used_data":464,"kb_used_omap":0,"kb_used_meta":5440,"kb_avail":20961512,"statfs":{"total":21470642176,"available":21464588288,"internally_reserved":0,"allocated":475136,"data_stored":192888,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5570560},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Tue Mar 10 05:46:54 
2026","interfaces":[{"interface":"back","average":{"1min":0.52000000000000002,"5min":0.52000000000000002,"15min":0.52000000000000002},"min":{"1min":0.31900000000000001,"5min":0.31900000000000001,"15min":0.31900000000000001},"max":{"1min":0.98499999999999999,"5min":0.98499999999999999,"15min":0.98499999999999999},"last":0.72499999999999998},{"interface":"front","average":{"1min":0.54800000000000004,"5min":0.54800000000000004,"15min":0.54800000000000004},"min":{"1min":0.312,"5min":0.312,"15min":0.312},"max":{"1min":1.3169999999999999,"5min":1.3169999999999999,"15min":1.3169999999999999},"last":0.60499999999999998}]},{"osd":1,"last update":"Tue Mar 10 05:46:54 2026","interfaces":[{"interface":"back","average":{"1min":0.58699999999999997,"5min":0.58699999999999997,"15min":0.58699999999999997},"min":{"1min":0.34000000000000002,"5min":0.34000000000000002,"15min":0.34000000000000002},"max":{"1min":0.97299999999999998,"5min":0.97299999999999998,"15min":0.97299999999999998},"last":0.56899999999999995},{"interface":"front","average":{"1min":0.55500000000000005,"5min":0.55500000000000005,"15min":0.55500000000000005},"min":{"1min":0.29899999999999999,"5min":0.29899999999999999,"15min":0.29899999999999999},"max":{"1min":0.999,"5min":0.999,"15min":0.999},"last":0.67800000000000005}]},{"osd":2,"last update":"Tue Mar 10 05:46:54 2026","interfaces":[{"interface":"back","average":{"1min":0.59099999999999997,"5min":0.59099999999999997,"15min":0.59099999999999997},"min":{"1min":0.32100000000000001,"5min":0.32100000000000001,"15min":0.32100000000000001},"max":{"1min":0.99199999999999999,"5min":0.99199999999999999,"15min":0.99199999999999999},"last":0.746},{"interface":"front","average":{"1min":0.56399999999999995,"5min":0.56399999999999995,"15min":0.56399999999999995},"min":{"1min":0.23999999999999999,"5min":0.23999999999999999,"15min":0.23999999999999999},"max":{"1min":0.79200000000000004,"5min":0.79200000000000004,"15min":0.79200000000000004},"last":0.629}]},{"osd":3,"last update":"Tue Mar 10 05:46:54 2026","interfaces":[{"interface":"back","average":{"1min":0.59299999999999997,"5min":0.59299999999999997,"15min":0.59299999999999997},"min":{"1min":0.41299999999999998,"5min":0.41299999999999998,"15min":0.41299999999999998},"max":{"1min":0.79900000000000004,"5min":0.79900000000000004,"15min":0.79900000000000004},"last":0.65400000000000003},{"interface":"front","average":{"1min":0.61299999999999999,"5min":0.61299999999999999,"15min":0.61299999999999999},"min":{"1min":0.34699999999999998,"5min":0.34699999999999998,"15min":0.34699999999999998},"max":{"1min":0.94399999999999995,"5min":0.94399999999999995,"15min":0.94399999999999995},"last":0.59299999999999997}]},{"osd":5,"last update":"Tue Mar 10 05:47:09 2026","interfaces":[{"interface":"back","average":{"1min":0.52100000000000002,"5min":0.52100000000000002,"15min":0.52100000000000002},"min":{"1min":0.27500000000000002,"5min":0.27500000000000002,"15min":0.27500000000000002},"max":{"1min":0.95399999999999996,"5min":0.95399999999999996,"15min":0.95399999999999996},"last":0.55600000000000005},{"interface":"front","average":{"1min":0.50900000000000001,"5min":0.50900000000000001,"15min":0.50900000000000001},"min":{"1min":0.249,"5min":0.249,"15min":0.249},"max":{"1min":1.002,"5min":1.002,"15min":1.002},"last":0.54400000000000004}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.73699999999999999}]},{"osd":7,"last 
update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.61299999999999999}]}]},{"osd":5,"up_from":34,"seq":146028888078,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":5848,"kb_used_data":464,"kb_used_omap":0,"kb_used_meta":5376,"kb_avail":20961576,"statfs":{"total":21470642176,"available":21464653824,"internally_reserved":0,"allocated":475136,"data_stored":192888,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5505024},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Tue Mar 10 05:47:08 2026","interfaces":[{"interface":"back","average":{"1min":0.67300000000000004,"5min":0.67300000000000004,"15min":0.67300000000000004},"min":{"1min":0.46899999999999997,"5min":0.46899999999999997,"15min":0.46899999999999997},"max":{"1min":0.91100000000000003,"5min":0.91100000000000003,"15min":0.91100000000000003},"last":0.48399999999999999},{"interface":"front","average":{"1min":0.56899999999999995,"5min":0.56899999999999995,"15min":0.56899999999999995},"min":{"1min":0.34100000000000003,"5min":0.34100000000000003,"15min":0.34100000000000003},"max":{"1min":0.78600000000000003,"5min":0.78600000000000003,"15min":0.78600000000000003},"last":0.76800000000000002}]},{"osd":1,"last update":"Tue Mar 10 05:47:08 2026","interfaces":[{"interface":"back","average":{"1min":0.60299999999999998,"5min":0.60299999999999998,"15min":0.60299999999999998},"min":{"1min":0.39900000000000002,"5min":0.39900000000000002,"15min":0.39900000000000002},"max":{"1min":0.79300000000000004,"5min":0.79300000000000004,"15min":0.79300000000000004},"last":0.66000000000000003},{"interface":"front","average":{"1min":0.58999999999999997,"5min":0.58999999999999997,"15min":0.58999999999999997},"min":{"1min":0.32500000000000001,"5min":0.32500000000000001,"15min":0.32500000000000001},"max":{"1min":0.877,"5min":0.877,"15min":0.877},"last":0.46000000000000002}]},{"osd":2,"last update":"Tue Mar 10 05:47:08 2026","interfaces":[{"interface":"back","average":{"1min":0.69799999999999995,"5min":0.69799999999999995,"15min":0.69799999999999995},"min":{"1min":0.48599999999999999,"5min":0.48599999999999999,"15min":0.48599999999999999},"max":{"1min":1.04,"5min":1.04,"15min":1.04},"last":0.61599999999999999},{"interface":"front","average":{"1min":0.65000000000000002,"5min":0.65000000000000002,"15min":0.65000000000000002},"min":{"1min":0.40300000000000002,"5min":0.40300000000000002,"15min":0.40300000000000002},"max":{"1min":0.84999999999999998,"5min":0.84999999999999998,"15min":0.84999999999999998},"last":0.52500000000000002}]},{"osd":3,"last update":"Tue Mar 10 05:47:08 
2026","interfaces":[{"interface":"back","average":{"1min":0.68999999999999995,"5min":0.68999999999999995,"15min":0.68999999999999995},"min":{"1min":0.40999999999999998,"5min":0.40999999999999998,"15min":0.40999999999999998},"max":{"1min":1.111,"5min":1.111,"15min":1.111},"last":0.79900000000000004},{"interface":"front","average":{"1min":0.627,"5min":0.627,"15min":0.627},"min":{"1min":0.38300000000000001,"5min":0.38300000000000001,"15min":0.38300000000000001},"max":{"1min":0.83299999999999996,"5min":0.83299999999999996,"15min":0.83299999999999996},"last":0.67500000000000004}]},{"osd":4,"last update":"Tue Mar 10 05:47:08 2026","interfaces":[{"interface":"back","average":{"1min":0.47299999999999998,"5min":0.47299999999999998,"15min":0.47299999999999998},"min":{"1min":0.23499999999999999,"5min":0.23499999999999999,"15min":0.23499999999999999},"max":{"1min":0.79200000000000004,"5min":0.79200000000000004,"15min":0.79200000000000004},"last":0.44800000000000001},{"interface":"front","average":{"1min":0.59399999999999997,"5min":0.59399999999999997,"15min":0.59399999999999997},"min":{"1min":0.29399999999999998,"5min":0.29399999999999998,"15min":0.29399999999999998},"max":{"1min":1.075,"5min":1.075,"15min":1.075},"last":0.82599999999999996}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.42999999999999999}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.39700000000000002}]}]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T05:47:15.124 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-10T05:47:15.124 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
2026-03-10T05:47:15.124 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-10T05:47:15.124 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph health --format=json 2026-03-10T05:47:15.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:14 vm05 bash[17864]: audit 2026-03-10T05:47:13.029564+0000 mgr.y (mgr.14409) 31 : audit [DBG] from='client.24427 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:47:15.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:14 vm05 bash[17864]: cluster 2026-03-10T05:47:13.682556+0000 mgr.y (mgr.14409) 32 : cluster [DBG] pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:16.746 INFO:teuthology.orchestra.run.vm02.stderr:Inferring config /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/mon.c/config 2026-03-10T05:47:17.065 INFO:teuthology.orchestra.run.vm02.stdout: 2026-03-10T05:47:17.065 INFO:teuthology.orchestra.run.vm02.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-10T05:47:17.075 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:16 vm02 bash[17462]: audit 2026-03-10T05:47:15.071213+0000 mgr.y (mgr.14409) 33 : audit [DBG] from='client.14529 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:47:17.075 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:16 vm02 bash[17462]: cluster 2026-03-10T05:47:15.682807+0000 mgr.y (mgr.14409) 34 : cluster [DBG] pgmap v15: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:17.077 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:16 vm02 bash[22526]: audit 2026-03-10T05:47:15.071213+0000 mgr.y (mgr.14409) 33 : audit [DBG] from='client.14529 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:47:17.077 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:16 vm02 bash[22526]: cluster 2026-03-10T05:47:15.682807+0000 mgr.y (mgr.14409) 34 : cluster [DBG] pgmap v15: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:17.077 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:16 vm02 bash[39873]: level=info ts=2026-03-10T05:47:16.729Z caller=cluster.go:688 component=cluster msg="gossip settled; proceeding" elapsed=10.003098057s 2026-03-10T05:47:17.119 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-10T05:47:17.119 INFO:tasks.cephadm:Setup complete, yielding 2026-03-10T05:47:17.119 INFO:teuthology.run_tasks:Running task cephadm.shell... 
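[editor's note] Each cephadm.shell task entry is executed by wrapping it in a `cephadm shell` call on the target host: the conf/keyring paths and the run's fsid are fixed, every name listed under `env:` is forwarded as an `-e NAME=value` flag (hence the `-e sha1=...` on the radosgw-admin invocations below, while the plain `use_repo_digest` config-set carries none), and the command itself is passed through `bash -c`. A rough Python reconstruction of the command lines the DEBUG records below show (the helper name is hypothetical, not the task's real code):

import shlex

FSID = "107483ae-1c44-11f1-b530-c1172cd6122a"
SHA1 = "e911bdebe5c8faa3800735d1568fcdca65db60df"

def cephadm_shell(command, env=None):
    argv = ["sudo", "/home/ubuntu/cephtest/cephadm",
            "--image", "quay.io/ceph/ceph:v17.2.0", "shell",
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            "--fsid", FSID]
    for name, value in (env or {}).items():
        argv += ["-e", f"{name}={value}"]
    argv += ["--", "bash", "-c", command]
    return " ".join(shlex.quote(a) for a in argv)

# The RGW bootstrap that follows: realm, zonegroup, zone, then a period
# commit so the multisite configuration takes effect.
for cmd in [
    "radosgw-admin realm create --rgw-realm=r --default",
    "radosgw-admin zonegroup create --rgw-zonegroup=default --master --default",
    "radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=z --master --default",
    "radosgw-admin period update --rgw-realm=r --commit",
]:
    print(cephadm_shell(cmd, env={"sha1": SHA1}))
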
2026-03-10T05:47:17.121 INFO:tasks.cephadm:Running commands on role mon.a host ubuntu@vm02.local 2026-03-10T05:47:17.121 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- bash -c 'ceph config set mgr mgr/cephadm/use_repo_digest false --force' 2026-03-10T05:47:17.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:16 vm05 bash[17864]: audit 2026-03-10T05:47:15.071213+0000 mgr.y (mgr.14409) 33 : audit [DBG] from='client.14529 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T05:47:17.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:16 vm05 bash[17864]: cluster 2026-03-10T05:47:15.682807+0000 mgr.y (mgr.14409) 34 : cluster [DBG] pgmap v15: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:17.334 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:47:17 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:47:17] "GET /metrics HTTP/1.1" 200 191103 "" "Prometheus/2.33.4" 2026-03-10T05:47:17.536 INFO:teuthology.run_tasks:Running task cephadm.shell... 2026-03-10T05:47:17.538 INFO:tasks.cephadm:Running commands on role mon.a host ubuntu@vm02.local 2026-03-10T05:47:17.538 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'radosgw-admin realm create --rgw-realm=r --default' 2026-03-10T05:47:18.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:17 vm02 bash[17462]: audit 2026-03-10T05:47:17.059952+0000 mon.b (mon.2) 25 : audit [DBG] from='client.? 192.168.123.102:0/3442309710' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T05:47:18.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:17 vm02 bash[17462]: audit 2026-03-10T05:47:17.489617+0000 mon.a (mon.0) 609 : audit [INF] from='client.? ' entity='client.admin' 2026-03-10T05:47:18.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:17 vm02 bash[22526]: audit 2026-03-10T05:47:17.059952+0000 mon.b (mon.2) 25 : audit [DBG] from='client.? 192.168.123.102:0/3442309710' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T05:47:18.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:17 vm02 bash[22526]: audit 2026-03-10T05:47:17.489617+0000 mon.a (mon.0) 609 : audit [INF] from='client.? ' entity='client.admin' 2026-03-10T05:47:18.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:17 vm05 bash[17864]: audit 2026-03-10T05:47:17.059952+0000 mon.b (mon.2) 25 : audit [DBG] from='client.? 192.168.123.102:0/3442309710' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T05:47:18.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:17 vm05 bash[17864]: audit 2026-03-10T05:47:17.489617+0000 mon.a (mon.0) 609 : audit [INF] from='client.? 
' entity='client.admin' 2026-03-10T05:47:19.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:18 vm02 bash[17462]: cluster 2026-03-10T05:47:17.683050+0000 mgr.y (mgr.14409) 35 : cluster [DBG] pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:19.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:18 vm02 bash[22526]: cluster 2026-03-10T05:47:17.683050+0000 mgr.y (mgr.14409) 35 : cluster [DBG] pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:19.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:18 vm05 bash[17864]: cluster 2026-03-10T05:47:17.683050+0000 mgr.y (mgr.14409) 35 : cluster [DBG] pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:19.847 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T05:47:19.847 INFO:teuthology.orchestra.run.vm02.stdout: "id": "fb11b072-4c9f-4af9-80fd-732a30fdcbae", 2026-03-10T05:47:19.847 INFO:teuthology.orchestra.run.vm02.stdout: "name": "r", 2026-03-10T05:47:19.847 INFO:teuthology.orchestra.run.vm02.stdout: "current_period": "263cffb0-ab49-4d9f-a14d-00088d94d487", 2026-03-10T05:47:19.847 INFO:teuthology.orchestra.run.vm02.stdout: "epoch": 1 2026-03-10T05:47:19.847 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T05:47:19.896 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'radosgw-admin zonegroup create --rgw-zonegroup=default --master --default' 2026-03-10T05:47:20.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:19 vm02 bash[17462]: audit 2026-03-10T05:47:18.806533+0000 mon.b (mon.2) 26 : audit [INF] from='client.? 192.168.123.102:0/2898639240' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T05:47:20.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:19 vm02 bash[17462]: cluster 2026-03-10T05:47:18.808075+0000 mon.a (mon.0) 610 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-10T05:47:20.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:19 vm02 bash[17462]: audit 2026-03-10T05:47:18.811907+0000 mon.a (mon.0) 611 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T05:47:20.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:19 vm02 bash[22526]: audit 2026-03-10T05:47:18.806533+0000 mon.b (mon.2) 26 : audit [INF] from='client.? 192.168.123.102:0/2898639240' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T05:47:20.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:19 vm02 bash[22526]: cluster 2026-03-10T05:47:18.808075+0000 mon.a (mon.0) 610 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-10T05:47:20.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:19 vm02 bash[22526]: audit 2026-03-10T05:47:18.811907+0000 mon.a (mon.0) 611 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T05:47:20.195 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T05:47:20.195 INFO:teuthology.orchestra.run.vm02.stdout: "id": "9c21ed15-ac42-4b2f-9d98-2a55e5899bb4", 2026-03-10T05:47:20.195 INFO:teuthology.orchestra.run.vm02.stdout: "name": "default", 2026-03-10T05:47:20.195 INFO:teuthology.orchestra.run.vm02.stdout: "api_name": "default", 2026-03-10T05:47:20.195 INFO:teuthology.orchestra.run.vm02.stdout: "is_master": "true", 2026-03-10T05:47:20.195 INFO:teuthology.orchestra.run.vm02.stdout: "endpoints": [], 2026-03-10T05:47:20.195 INFO:teuthology.orchestra.run.vm02.stdout: "hostnames": [], 2026-03-10T05:47:20.195 INFO:teuthology.orchestra.run.vm02.stdout: "hostnames_s3website": [], 2026-03-10T05:47:20.195 INFO:teuthology.orchestra.run.vm02.stdout: "master_zone": "", 2026-03-10T05:47:20.195 INFO:teuthology.orchestra.run.vm02.stdout: "zones": [], 2026-03-10T05:47:20.195 INFO:teuthology.orchestra.run.vm02.stdout: "placement_targets": [], 2026-03-10T05:47:20.195 INFO:teuthology.orchestra.run.vm02.stdout: "default_placement": "", 2026-03-10T05:47:20.195 INFO:teuthology.orchestra.run.vm02.stdout: "realm_id": "fb11b072-4c9f-4af9-80fd-732a30fdcbae", 2026-03-10T05:47:20.195 INFO:teuthology.orchestra.run.vm02.stdout: "sync_policy": { 2026-03-10T05:47:20.195 INFO:teuthology.orchestra.run.vm02.stdout: "groups": [] 2026-03-10T05:47:20.195 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T05:47:20.195 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T05:47:20.232 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=z --master --default' 2026-03-10T05:47:20.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:19 vm05 bash[17864]: audit 2026-03-10T05:47:18.806533+0000 mon.b (mon.2) 26 : audit [INF] from='client.? 192.168.123.102:0/2898639240' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T05:47:20.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:19 vm05 bash[17864]: cluster 2026-03-10T05:47:18.808075+0000 mon.a (mon.0) 610 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-10T05:47:20.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:19 vm05 bash[17864]: audit 2026-03-10T05:47:18.811907+0000 mon.a (mon.0) 611 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T05:47:20.576 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T05:47:20.576 INFO:teuthology.orchestra.run.vm02.stdout: "id": "b64c4367-f368-4901-9b9b-cd9fd0e3497b", 2026-03-10T05:47:20.576 INFO:teuthology.orchestra.run.vm02.stdout: "name": "z", 2026-03-10T05:47:20.576 INFO:teuthology.orchestra.run.vm02.stdout: "domain_root": "z.rgw.meta:root", 2026-03-10T05:47:20.576 INFO:teuthology.orchestra.run.vm02.stdout: "control_pool": "z.rgw.control", 2026-03-10T05:47:20.576 INFO:teuthology.orchestra.run.vm02.stdout: "gc_pool": "z.rgw.log:gc", 2026-03-10T05:47:20.576 INFO:teuthology.orchestra.run.vm02.stdout: "lc_pool": "z.rgw.log:lc", 2026-03-10T05:47:20.576 INFO:teuthology.orchestra.run.vm02.stdout: "log_pool": "z.rgw.log", 2026-03-10T05:47:20.576 INFO:teuthology.orchestra.run.vm02.stdout: "intent_log_pool": "z.rgw.log:intent", 2026-03-10T05:47:20.576 INFO:teuthology.orchestra.run.vm02.stdout: "usage_log_pool": "z.rgw.log:usage", 2026-03-10T05:47:20.576 INFO:teuthology.orchestra.run.vm02.stdout: "roles_pool": "z.rgw.meta:roles", 2026-03-10T05:47:20.576 INFO:teuthology.orchestra.run.vm02.stdout: "reshard_pool": "z.rgw.log:reshard", 2026-03-10T05:47:20.576 INFO:teuthology.orchestra.run.vm02.stdout: "user_keys_pool": "z.rgw.meta:users.keys", 2026-03-10T05:47:20.577 INFO:teuthology.orchestra.run.vm02.stdout: "user_email_pool": "z.rgw.meta:users.email", 2026-03-10T05:47:20.577 INFO:teuthology.orchestra.run.vm02.stdout: "user_swift_pool": "z.rgw.meta:users.swift", 2026-03-10T05:47:20.577 INFO:teuthology.orchestra.run.vm02.stdout: "user_uid_pool": "z.rgw.meta:users.uid", 2026-03-10T05:47:20.577 INFO:teuthology.orchestra.run.vm02.stdout: "otp_pool": "z.rgw.otp", 2026-03-10T05:47:20.577 INFO:teuthology.orchestra.run.vm02.stdout: "system_key": { 2026-03-10T05:47:20.577 INFO:teuthology.orchestra.run.vm02.stdout: "access_key": "", 2026-03-10T05:47:20.577 INFO:teuthology.orchestra.run.vm02.stdout: "secret_key": "" 2026-03-10T05:47:20.577 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T05:47:20.577 INFO:teuthology.orchestra.run.vm02.stdout: "placement_pools": [ 2026-03-10T05:47:20.577 INFO:teuthology.orchestra.run.vm02.stdout: { 2026-03-10T05:47:20.577 INFO:teuthology.orchestra.run.vm02.stdout: "key": "default-placement", 2026-03-10T05:47:20.577 INFO:teuthology.orchestra.run.vm02.stdout: "val": { 2026-03-10T05:47:20.577 INFO:teuthology.orchestra.run.vm02.stdout: "index_pool": "z.rgw.buckets.index", 2026-03-10T05:47:20.577 INFO:teuthology.orchestra.run.vm02.stdout: "storage_classes": { 2026-03-10T05:47:20.577 INFO:teuthology.orchestra.run.vm02.stdout: "STANDARD": { 2026-03-10T05:47:20.577 INFO:teuthology.orchestra.run.vm02.stdout: "data_pool": "z.rgw.buckets.data" 2026-03-10T05:47:20.577 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T05:47:20.577 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T05:47:20.577 INFO:teuthology.orchestra.run.vm02.stdout: "data_extra_pool": "z.rgw.buckets.non-ec", 2026-03-10T05:47:20.577 INFO:teuthology.orchestra.run.vm02.stdout: "index_type": 0 2026-03-10T05:47:20.577 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T05:47:20.577 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T05:47:20.577 INFO:teuthology.orchestra.run.vm02.stdout: ], 2026-03-10T05:47:20.577 INFO:teuthology.orchestra.run.vm02.stdout: "realm_id": "fb11b072-4c9f-4af9-80fd-732a30fdcbae", 2026-03-10T05:47:20.577 
INFO:teuthology.orchestra.run.vm02.stdout: "notif_pool": "z.rgw.log:notif" 2026-03-10T05:47:20.577 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T05:47:20.617 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'radosgw-admin period update --rgw-realm=r --commit' 2026-03-10T05:47:21.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:20 vm02 bash[17462]: cluster 2026-03-10T05:47:19.683285+0000 mgr.y (mgr.14409) 36 : cluster [DBG] pgmap v18: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:21.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:20 vm02 bash[17462]: audit 2026-03-10T05:47:19.804736+0000 mon.a (mon.0) 612 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T05:47:21.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:20 vm02 bash[17462]: cluster 2026-03-10T05:47:19.804769+0000 mon.a (mon.0) 613 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-10T05:47:21.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:20 vm02 bash[17462]: cluster 2026-03-10T05:47:20.806051+0000 mon.a (mon.0) 614 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-10T05:47:21.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:20 vm02 bash[22526]: cluster 2026-03-10T05:47:19.683285+0000 mgr.y (mgr.14409) 36 : cluster [DBG] pgmap v18: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:21.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:20 vm02 bash[22526]: audit 2026-03-10T05:47:19.804736+0000 mon.a (mon.0) 612 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T05:47:21.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:20 vm02 bash[22526]: cluster 2026-03-10T05:47:19.804769+0000 mon.a (mon.0) 613 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-10T05:47:21.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:20 vm02 bash[22526]: cluster 2026-03-10T05:47:20.806051+0000 mon.a (mon.0) 614 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-10T05:47:21.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:20 vm05 bash[17864]: cluster 2026-03-10T05:47:19.683285+0000 mgr.y (mgr.14409) 36 : cluster [DBG] pgmap v18: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:21.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:20 vm05 bash[17864]: audit 2026-03-10T05:47:19.804736+0000 mon.a (mon.0) 612 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T05:47:21.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:20 vm05 bash[17864]: cluster 2026-03-10T05:47:19.804769+0000 mon.a (mon.0) 613 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-10T05:47:21.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:20 vm05 bash[17864]: cluster 2026-03-10T05:47:20.806051+0000 mon.a (mon.0) 614 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-10T05:47:23.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:22 vm05 bash[17864]: cluster 2026-03-10T05:47:21.683532+0000 mgr.y (mgr.14409) 37 : cluster [DBG] pgmap v21: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:23.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:22 vm05 bash[17864]: cluster 2026-03-10T05:47:21.848230+0000 mon.a (mon.0) 615 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-10T05:47:23.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:22 vm05 bash[17864]: audit 2026-03-10T05:47:21.855629+0000 mon.b (mon.2) 27 : audit [INF] from='client.? 192.168.123.102:0/3743224131' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]: dispatch 2026-03-10T05:47:23.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:22 vm05 bash[17864]: audit 2026-03-10T05:47:21.860900+0000 mon.a (mon.0) 616 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]: dispatch 2026-03-10T05:47:23.258 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:47:22 vm05 bash[18520]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:47:22] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T05:47:23.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:22 vm02 bash[17462]: cluster 2026-03-10T05:47:21.683532+0000 mgr.y (mgr.14409) 37 : cluster [DBG] pgmap v21: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:23.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:22 vm02 bash[17462]: cluster 2026-03-10T05:47:21.848230+0000 mon.a (mon.0) 615 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-10T05:47:23.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:22 vm02 bash[17462]: audit 2026-03-10T05:47:21.855629+0000 mon.b (mon.2) 27 : audit [INF] from='client.? 192.168.123.102:0/3743224131' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]: dispatch 2026-03-10T05:47:23.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:22 vm02 bash[17462]: audit 2026-03-10T05:47:21.860900+0000 mon.a (mon.0) 616 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]: dispatch 2026-03-10T05:47:23.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:22 vm02 bash[22526]: cluster 2026-03-10T05:47:21.683532+0000 mgr.y (mgr.14409) 37 : cluster [DBG] pgmap v21: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:23.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:22 vm02 bash[22526]: cluster 2026-03-10T05:47:21.848230+0000 mon.a (mon.0) 615 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-10T05:47:23.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:22 vm02 bash[22526]: audit 2026-03-10T05:47:21.855629+0000 mon.b (mon.2) 27 : audit [INF] from='client.? 192.168.123.102:0/3743224131' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]: dispatch 2026-03-10T05:47:23.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:22 vm02 bash[22526]: audit 2026-03-10T05:47:21.860900+0000 mon.a (mon.0) 616 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]: dispatch 2026-03-10T05:47:24.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:24 vm02 bash[17462]: audit 2026-03-10T05:47:22.859357+0000 mon.a (mon.0) 617 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]': finished 2026-03-10T05:47:24.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:24 vm02 bash[17462]: cluster 2026-03-10T05:47:22.859426+0000 mon.a (mon.0) 618 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-10T05:47:24.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:24 vm02 bash[22526]: audit 2026-03-10T05:47:22.859357+0000 mon.a (mon.0) 617 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]': finished 2026-03-10T05:47:24.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:24 vm02 bash[22526]: cluster 2026-03-10T05:47:22.859426+0000 mon.a (mon.0) 618 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-10T05:47:24.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:24 vm05 bash[17864]: audit 2026-03-10T05:47:22.859357+0000 mon.a (mon.0) 617 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]': finished 2026-03-10T05:47:24.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:24 vm05 bash[17864]: cluster 2026-03-10T05:47:22.859426+0000 mon.a (mon.0) 618 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-10T05:47:25.509 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:25 vm05 bash[17864]: cluster 2026-03-10T05:47:23.683839+0000 mgr.y (mgr.14409) 38 : cluster [DBG] pgmap v24: 65 pgs: 32 creating+peering, 33 active+clean; 451 KiB data, 50 MiB used, 160 GiB / 160 GiB avail; 7.5 KiB/s rd, 4.0 KiB/s wr, 12 op/s 2026-03-10T05:47:25.509 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:25 vm05 bash[17864]: cluster 2026-03-10T05:47:24.016301+0000 mon.a (mon.0) 619 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-10T05:47:25.509 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:25 vm05 bash[17864]: audit 2026-03-10T05:47:24.021459+0000 mon.b (mon.2) 28 : audit [INF] from='client.? 
192.168.123.102:0/3743224131' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]: dispatch 2026-03-10T05:47:25.509 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:25 vm05 bash[17864]: audit 2026-03-10T05:47:24.031137+0000 mon.a (mon.0) 620 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]: dispatch 2026-03-10T05:47:25.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:25 vm02 bash[17462]: cluster 2026-03-10T05:47:23.683839+0000 mgr.y (mgr.14409) 38 : cluster [DBG] pgmap v24: 65 pgs: 32 creating+peering, 33 active+clean; 451 KiB data, 50 MiB used, 160 GiB / 160 GiB avail; 7.5 KiB/s rd, 4.0 KiB/s wr, 12 op/s 2026-03-10T05:47:25.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:25 vm02 bash[17462]: cluster 2026-03-10T05:47:24.016301+0000 mon.a (mon.0) 619 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-10T05:47:25.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:25 vm02 bash[17462]: audit 2026-03-10T05:47:24.021459+0000 mon.b (mon.2) 28 : audit [INF] from='client.? 192.168.123.102:0/3743224131' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]: dispatch 2026-03-10T05:47:25.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:25 vm02 bash[17462]: audit 2026-03-10T05:47:24.031137+0000 mon.a (mon.0) 620 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]: dispatch 2026-03-10T05:47:25.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:25 vm02 bash[22526]: cluster 2026-03-10T05:47:23.683839+0000 mgr.y (mgr.14409) 38 : cluster [DBG] pgmap v24: 65 pgs: 32 creating+peering, 33 active+clean; 451 KiB data, 50 MiB used, 160 GiB / 160 GiB avail; 7.5 KiB/s rd, 4.0 KiB/s wr, 12 op/s 2026-03-10T05:47:25.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:25 vm02 bash[22526]: cluster 2026-03-10T05:47:24.016301+0000 mon.a (mon.0) 619 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-10T05:47:25.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:25 vm02 bash[22526]: audit 2026-03-10T05:47:24.021459+0000 mon.b (mon.2) 28 : audit [INF] from='client.? 192.168.123.102:0/3743224131' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]: dispatch 2026-03-10T05:47:25.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:25 vm02 bash[22526]: audit 2026-03-10T05:47:24.031137+0000 mon.a (mon.0) 620 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]: dispatch 2026-03-10T05:47:26.094 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:26.094 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:26 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:26.095 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:47:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:26.095 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:47:26 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:26.096 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:47:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:26.096 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:47:26 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:26.096 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:47:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:26.096 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:47:26 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:26.097 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:47:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T05:47:26.097 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:47:26 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:26.097 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:26.097 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:26 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:26.097 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:47:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:26.097 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:47:26 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:26.097 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:47:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:26.097 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:47:26 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:26.097 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:26.097 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:26.097 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:26.097 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:26.097 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:26.097 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 systemd[1]: Started Ceph grafana.a for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T05:47:26.347 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:26 vm05 bash[17864]: audit 2026-03-10T05:47:25.075618+0000 mon.a (mon.0) 621 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]': finished 2026-03-10T05:47:26.347 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:26 vm05 bash[17864]: cluster 2026-03-10T05:47:25.075700+0000 mon.a (mon.0) 622 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-10T05:47:26.347 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:26 vm05 bash[17864]: cluster 2026-03-10T05:47:25.684121+0000 mgr.y (mgr.14409) 39 : cluster [DBG] pgmap v27: 97 pgs: 32 unknown, 32 creating+peering, 33 active+clean; 451 KiB data, 50 MiB used, 160 GiB / 160 GiB avail; 7.5 KiB/s rd, 4.0 KiB/s wr, 12 op/s 2026-03-10T05:47:26.347 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:26 vm05 bash[17864]: cluster 2026-03-10T05:47:26.087581+0000 mon.a (mon.0) 623 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-10T05:47:26.347 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:26 vm05 bash[17864]: audit 2026-03-10T05:47:26.090005+0000 mon.c (mon.1) 56 : audit [INF] from='client.? 
192.168.123.102:0/2911771414' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T05:47:26.347 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:26 vm05 bash[17864]: audit 2026-03-10T05:47:26.090874+0000 mon.a (mon.0) 624 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="The state of unified alerting is still not defined. The decision will be made during as we run the database migrations" logger=settings 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=warn msg="falling back to legacy setting of 'min_interval_seconds'; please use the configuration option in the `unified_alerting` section if Grafana 8 alerts are enabled." logger=settings 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Config loaded from" logger=settings file=/usr/share/grafana/conf/defaults.ini 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Config loaded from" logger=settings file=/etc/grafana/grafana.ini 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_DATA=/var/lib/grafana" 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_LOGS=/var/log/grafana" 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Path Home" logger=settings path=/usr/share/grafana 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Path Data" logger=settings path=/var/lib/grafana 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Path Logs" logger=settings path=/var/log/grafana 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Path Plugins" logger=settings path=/var/lib/grafana/plugins 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Path Provisioning" logger=settings path=/etc/grafana/provisioning 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: 
t=2026-03-10T05:47:26+0000 lvl=info msg="App mode production" logger=settings 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Connecting to DB" logger=sqlstore dbtype=sqlite3 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=warn msg="SQLite database file has broader permissions than it should" logger=sqlstore path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r----- 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Starting DB migrations" logger=migrator 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create migration_log table" 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create user table" 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index user.login" 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index user.email" 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_user_login - v1" 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_user_email - v1" 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table user to user_v1 - v1" 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create user table v2" 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_user_login - v2" 2026-03-10T05:47:26.347 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_user_email - v2" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="copy data_source v1 to v2" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old table user_v1" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column help_flags1 to user table" 2026-03-10T05:47:26.348 
INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Update user table charset" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add last_seen_at column to user" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add missing user data" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add is_disabled column to user" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add index user.login/user.email" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add is_service_account column to user" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create temp user table v1-7" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_email - v1-7" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_org_id - v1-7" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_code - v1-7" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_status - v1-7" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Update temp_user table charset" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_temp_user_email - v1" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_temp_user_org_id - v1" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_temp_user_code - v1" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_temp_user_status - v1" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 
bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table temp_user to temp_user_tmp_qwerty - v1" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create temp_user v2" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_email - v2" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_org_id - v2" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_code - v2" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_status - v2" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="copy temp_user v1 to v2" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop temp_user_tmp_qwerty" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Set created for temp users that will otherwise prematurely expire" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create star table" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index star.user_id_dashboard_id" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create org table v1" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_org_name - v1" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create org_user table v1" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_org_user_org_id - v1" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_org_user_org_id_user_id - v1" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info 
msg="Executing migration" logger=migrator id="create index IDX_org_user_user_id - v1" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Update org table charset" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Update org_user table charset" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Migrate all Read Only Viewers to Viewers" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard table" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard.account_id" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index dashboard_account_id_slug" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_tag table" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index dashboard_tag.dasboard_id_term" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table dashboard to dashboard_v1 - v1" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard v2" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_dashboard_org_id - v2" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_dashboard_org_id_slug - v2" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="copy dashboard v1 to v2" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop table dashboard_v1" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="alter 
dashboard.data to mediumtext v1" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column updated_by in dashboard - v2" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column created_by in dashboard - v2" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column gnetId in dashboard" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for gnetId in dashboard" 2026-03-10T05:47:26.348 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column plugin_id in dashboard" 2026-03-10T05:47:26.349 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for plugin_id in dashboard" 2026-03-10T05:47:26.349 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for dashboard_id in dashboard_tag" 2026-03-10T05:47:26.349 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Update dashboard table charset" 2026-03-10T05:47:26.349 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Update dashboard_tag table charset" 2026-03-10T05:47:26.349 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column folder_id in dashboard" 2026-03-10T05:47:26.349 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column isFolder in dashboard" 2026-03-10T05:47:26.349 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column has_acl in dashboard" 2026-03-10T05:47:26.349 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column uid in dashboard" 2026-03-10T05:47:26.349 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Update uid column values in dashboard" 2026-03-10T05:47:26.349 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index dashboard_org_id_uid" 2026-03-10T05:47:26.349 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Remove unique index org_id_slug" 
2026-03-10T05:47:26.349 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Update dashboard title length" 2026-03-10T05:47:26.349 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index for dashboard_org_id_title_folder_id" 2026-03-10T05:47:26.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:26 vm02 bash[17462]: audit 2026-03-10T05:47:25.075618+0000 mon.a (mon.0) 621 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]': finished 2026-03-10T05:47:26.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:26 vm02 bash[17462]: cluster 2026-03-10T05:47:25.075700+0000 mon.a (mon.0) 622 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-10T05:47:26.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:26 vm02 bash[17462]: cluster 2026-03-10T05:47:25.684121+0000 mgr.y (mgr.14409) 39 : cluster [DBG] pgmap v27: 97 pgs: 32 unknown, 32 creating+peering, 33 active+clean; 451 KiB data, 50 MiB used, 160 GiB / 160 GiB avail; 7.5 KiB/s rd, 4.0 KiB/s wr, 12 op/s 2026-03-10T05:47:26.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:26 vm02 bash[17462]: cluster 2026-03-10T05:47:26.087581+0000 mon.a (mon.0) 623 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-10T05:47:26.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:26 vm02 bash[17462]: audit 2026-03-10T05:47:26.090005+0000 mon.c (mon.1) 56 : audit [INF] from='client.? 192.168.123.102:0/2911771414' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T05:47:26.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:26 vm02 bash[17462]: audit 2026-03-10T05:47:26.090874+0000 mon.a (mon.0) 624 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T05:47:26.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:26 vm02 bash[22526]: audit 2026-03-10T05:47:25.075618+0000 mon.a (mon.0) 621 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]': finished 2026-03-10T05:47:26.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:26 vm02 bash[22526]: cluster 2026-03-10T05:47:25.075700+0000 mon.a (mon.0) 622 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-10T05:47:26.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:26 vm02 bash[22526]: cluster 2026-03-10T05:47:25.684121+0000 mgr.y (mgr.14409) 39 : cluster [DBG] pgmap v27: 97 pgs: 32 unknown, 32 creating+peering, 33 active+clean; 451 KiB data, 50 MiB used, 160 GiB / 160 GiB avail; 7.5 KiB/s rd, 4.0 KiB/s wr, 12 op/s 2026-03-10T05:47:26.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:26 vm02 bash[22526]: cluster 2026-03-10T05:47:26.087581+0000 mon.a (mon.0) 623 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-10T05:47:26.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:26 vm02 bash[22526]: audit 2026-03-10T05:47:26.090005+0000 mon.c (mon.1) 56 : audit [INF] from='client.? 
192.168.123.102:0/2911771414' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T05:47:26.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:26 vm02 bash[22526]: audit 2026-03-10T05:47:26.090874+0000 mon.a (mon.0) 624 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T05:47:26.597 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_provisioning" 2026-03-10T05:47:26.597 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 2026-03-10T05:47:26.597 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_provisioning v2" 2026-03-10T05:47:26.597 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_dashboard_provisioning_dashboard_id - v2" 2026-03-10T05:47:26.597 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 2026-03-10T05:47:26.597 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="copy dashboard_provisioning v1 to v2" 2026-03-10T05:47:26.597 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop dashboard_provisioning_tmp_qwerty" 2026-03-10T05:47:26.597 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add check_sum column" 2026-03-10T05:47:26.597 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for dashboard_title" 2026-03-10T05:47:26.597 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="delete tags for deleted dashboards" 2026-03-10T05:47:26.597 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="delete stars for deleted dashboards" 2026-03-10T05:47:26.597 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for dashboard_is_folder" 2026-03-10T05:47:26.597 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create data_source table" 2026-03-10T05:47:26.597 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index 
data_source.account_id" 2026-03-10T05:47:26.597 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index data_source.account_id_name" 2026-03-10T05:47:26.597 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_data_source_account_id - v1" 2026-03-10T05:47:26.597 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_data_source_account_id_name - v1" 2026-03-10T05:47:26.597 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table data_source to data_source_v1 - v1" 2026-03-10T05:47:26.597 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create data_source table v2" 2026-03-10T05:47:26.597 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_data_source_org_id - v2" 2026-03-10T05:47:26.597 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_data_source_org_id_name - v2" 2026-03-10T05:47:26.597 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="copy data_source v1 to v2" 2026-03-10T05:47:26.597 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old table data_source_v1 #2" 2026-03-10T05:47:26.597 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column with_credentials" 2026-03-10T05:47:26.597 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add secure json data column" 2026-03-10T05:47:26.597 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Update data_source table charset" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Update initial version to 1" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add read_only data column" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Migrate logging ds to loki ds" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Update json_data with nulls" 2026-03-10T05:47:26.598 
INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add uid column" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Update uid value" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index datasource_org_id_uid" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index datasource_org_id_is_default" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create api_key table" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index api_key.account_id" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index api_key.key" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index api_key.account_id_name" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_api_key_account_id - v1" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_api_key_key - v1" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_api_key_account_id_name - v1" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table api_key to api_key_v1 - v1" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create api_key table v2" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_api_key_org_id - v2" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_api_key_key - v2" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_api_key_org_id_name - v2" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: 
t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="copy api_key v1 to v2" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old table api_key_v1" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Update api_key table charset" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add expires to api_key table" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add service account foreign key" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_snapshot table v4" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop table dashboard_snapshot_v4 #1" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_snapshot table v5 #2" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_dashboard_snapshot_key - v5" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_dashboard_snapshot_delete_key - v5" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_dashboard_snapshot_user_id - v5" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="alter dashboard_snapshot to mediumtext v2" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Update dashboard_snapshot table charset" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column external_delete_url to dashboard_snapshots table" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add encrypted dashboard json column" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Change dashboard_encrypted column to MEDIUMBLOB" 2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 
bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create quota table v1"
2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_quota_org_id_user_id_target - v1"
2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Update quota table charset"
2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create plugin_setting table"
2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_plugin_setting_org_id_plugin_id - v1"
2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column plugin_version to plugin_settings"
2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Update plugin_setting table charset"
2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create session table"
2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old table playlist table"
2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old table playlist_item table"
2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create playlist table v2"
2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create playlist item table v2"
2026-03-10T05:47:26.598 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Update playlist table charset"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Update playlist_item table charset"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop preferences table v2"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop preferences table v3"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create preferences table v3"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Update preferences table charset"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column team_id in preferences"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Update team_id column values in preferences"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column week_start in preferences"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create alert table v1"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index alert org_id & id "
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index alert state"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index alert dashboard_id"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Create alert_rule_tag table v1"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index alert_rule_tag.alert_id_tag_id"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Create alert_rule_tag table v2"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="copy alert_rule_tag v1 to v2"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop table alert_rule_tag_v1"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create alert_notification table v1"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column is_default"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column frequency"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column send_reminder"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column disable_resolve_message"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index alert_notification org_id & name"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Update alert table charset"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Update alert_notification table charset"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create notification_journal table v1"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index notification_journal org_id & alert_id & notifier_id"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop alert_notification_journal"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create alert_notification_state table v1"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index alert_notification_state org_id & alert_id & notifier_id"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add for to alert table"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column uid in alert_notification"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Update uid column values in alert_notification"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index alert_notification_org_id_uid"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Remove unique index org_id_name"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column secure_settings in alert_notification"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="alter alert.settings to mediumtext"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add non-unique index alert_notification_state_alert_id"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add non-unique index alert_rule_tag_alert_id"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old annotation table v4"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create annotation table v5"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index annotation 0 v3"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index annotation 1 v3"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index annotation 2 v3"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index annotation 3 v3"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index annotation 4 v3"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Update annotation table charset"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column region_id to annotation table"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Drop category_id index"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column tags to annotation table"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Create annotation_tag table v2"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index annotation_tag.annotation_id_tag_id"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table annotation_tag to annotation_tag_v2 - v2"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Create annotation_tag table v3"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="copy annotation_tag v2 to v3"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop table annotation_tag_v2"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Update alert annotations and set TEXT to empty"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add created time to annotation table"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add updated time to annotation table"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for created in annotation table"
2026-03-10T05:47:26.599 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for updated in annotation table"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Convert existing annotations from seconds to milliseconds"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add epoch_end column"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for epoch_end"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Make epoch_end the same as epoch"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Move region to single row"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Remove index org_id_epoch from annotation table"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for org_id_epoch_end_epoch on annotation table"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Remove index org_id_epoch_epoch_end from annotation table"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for alert_id on annotation table"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create test_data table"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_version table v1"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard_version.dashboard_id"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Set dashboard version to 1 where 0"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="save existing dashboard data in dashboard_version table v1"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="alter dashboard_version.data to mediumtext v1"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create team table"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index team.org_id"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index team_org_id_name"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create team member table"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index team_member.org_id"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index team_member_org_id_team_id_user_id"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index team_member.team_id"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column email to team table"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column external to team_member table"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column permission to team_member table"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard acl table"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard_acl_dashboard_id"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index dashboard_acl_dashboard_id_user_id"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index dashboard_acl_dashboard_id_team_id"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard_acl_user_id"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard_acl_team_id"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard_acl_org_id_role"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard_permission"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="save default acl rules in dashboard_acl table"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="delete acl rules for deleted dashboards and folders"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create tag table"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index tag.key_value"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create login attempt table"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index login_attempt.username"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_login_attempt_username - v1"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create login_attempt v2"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_login_attempt_username - v2"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="copy login_attempt v1 to v2"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop login_attempt_tmp_qwerty"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create user auth table"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_user_auth_auth_module_auth_id - v1"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="alter user_auth.auth_id to length 190"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add OAuth access token to user_auth"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add OAuth refresh token to user_auth"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add OAuth token type to user_auth"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add OAuth expiry to user_auth"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add index to user_id column in user_auth"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create server_lock table"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index server_lock.operation_uid"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create user auth token table"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index user_auth_token.auth_token"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index user_auth_token.prev_auth_token"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index user_auth_token.user_id"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add revoked_at to the user auth token"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create cache_data table"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index cache_data.cache_key"
2026-03-10T05:47:26.600 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create short_url table v1"
2026-03-10T05:47:26.601 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index short_url.org_id-uid"
2026-03-10T05:47:26.601 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="delete alert_definition table"
2026-03-10T05:47:26.601 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="recreate alert_definition table"
2026-03-10T05:47:26.601 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_definition on org_id and title columns"
2026-03-10T05:47:26.601 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_definition on org_id and uid columns"
2026-03-10T05:47:27.008 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="alter alert_definition table data column to mediumtext in mysql"
2026-03-10T05:47:27.008 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop index in alert_definition on org_id and title columns"
2026-03-10T05:47:27.008 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop index in alert_definition on org_id and uid columns"
2026-03-10T05:47:27.008 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index in alert_definition on org_id and title columns"
2026-03-10T05:47:27.008 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index in alert_definition on org_id and uid columns"
2026-03-10T05:47:27.008 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column paused in alert_definition"
2026-03-10T05:47:27.008 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop alert_definition table"
2026-03-10T05:47:27.008 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="delete alert_definition_version table"
2026-03-10T05:47:27.008 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="recreate alert_definition_version table"
2026-03-10T05:47:27.008 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_definition_version table on alert_definition_id and version columns"
2026-03-10T05:47:27.008 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_definition_version table on alert_definition_uid and version columns"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="alter alert_definition_version table data column to mediumtext in mysql"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="drop alert_definition_version table"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create alert_instance table"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_instance table on def_org_id, current_state columns"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add column current_state_end to alert_instance"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="remove index def_org_id, def_uid, current_state on alert_instance"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="remove index def_org_id, current_state on alert_instance"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="rename def_org_id to rule_org_id in alert_instance"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="rename def_uid to rule_uid in alert_instance"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index rule_org_id, rule_uid, current_state on alert_instance"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index rule_org_id, current_state on alert_instance"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create alert_rule table"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule on org_id and title columns"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule on org_id and uid columns"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="alter alert_rule table data column to mediumtext in mysql"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add column for to alert_rule"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add column annotations to alert_rule"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add column labels to alert_rule"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="remove unique index from alert_rule on org_id, title columns"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule on org_id, namespase_uid and title columns"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add dashboard_uid column to alert_rule"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add panel_id column to alert_rule"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create alert_rule_version table"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="alter alert_rule_version table data column to mediumtext in mysql"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add column for to alert_rule_version"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add column annotations to alert_rule_version"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add column labels to alert_rule_version"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id=create_alert_configuration_table
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column default in alert_configuration"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add column org_id in alert_configuration"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_configuration table on org_id column"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id=create_ngalert_configuration_table
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index in ngalert_configuration on org_id column"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="clear migration entry \"remove unified alerting data\""
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="move dashboard alerts to unified alerting"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create library_element table v1"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index library_element org_id-folder_id-name-kind"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create library_element_connection table v1"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index library_element_connection element_id-kind-connection_id"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index library_element org_id_uid"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="clone move dashboard alerts to unified alerting"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create data_keys table"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create kv_store table v1"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index kv_store.org_id-namespace-key"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="update dashboard_uid and panel_id from existing annotations"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create permission table"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index permission.role_id"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index role_id_action_scope"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create role table"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add column display_name"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add column group_name"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index role.org_id"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index role_org_id_name"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index role_org_id_uid"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create team role table"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index team_role.org_id"
2026-03-10T05:47:27.009 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index team_role_org_id_team_id_role_id"
2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index team_role.team_id"
2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create user role table"
2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index user_role.org_id"
2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index user_role_org_id_user_id_role_id"
2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index user_role.user_id"
2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create builtin role table"
2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index builtin_role.role_id"
2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index builtin_role.name"
2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Add column org_id to builtin_role table"
2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add index builtin_role.org_id"
2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index builtin_role_org_id_role_id_role"
2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="Remove unique index role_org_id_uid"
2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index role.uid"
2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="create seed assignment table"
2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index builtin_role_role_name"
2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="migrations completed" logger=migrator performed=381 skipped=0 duration=463.421489ms
2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Created default organization" logger=sqlstore
2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Initialising plugins" logger=plugin.manager
2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Plugin registered" logger=plugin.manager pluginId=input
2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Plugin registered" logger=plugin.manager pluginId=grafana-piechart-panel
2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Plugin registered" logger=plugin.manager pluginId=vonage-status-panel
2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="Live Push Gateway initialization" logger=live.push_http
2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=warn msg="[Deprecated] the datasource provisioning config is outdated. please upgrade" logger=provisioning.datasources filename=/etc/grafana/provisioning/datasources/ceph-dashboard.yml
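The migrator entries above are Grafana's one-time schema migration as the v17.2.0 grafana container comes up for the first time; the run finishes in about 463 ms (performed=381 skipped=0) and the daemon then registers its bundled panels. A minimal sketch for isolating this phase in the daemon's own journal on the host -- the unit name here is a placeholder, cephadm names daemon units ceph-<fsid>@<daemon>.<id>:

    # count the executed migrations; should roughly match performed=381 for this run
    journalctl -u 'ceph-<fsid>@grafana.a' --no-pager \
        | grep 'logger=migrator' \
        | grep -c 'msg="Executing migration"'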
please upgrade" logger=provisioning.datasources filename=/etc/grafana/provisioning/datasources/ceph-dashboard.yml 2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="inserting datasource from configuration " logger=provisioning.datasources name=Dashboard1 uid=P43CA22E17D0F9596 2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="HTTP Server Listen" logger=http.server address=[::]:3000 protocol=https subUrl= socket= 2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="warming cache for startup" logger=ngalert 2026-03-10T05:47:27.010 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:26 vm05 bash[33387]: t=2026-03-10T05:47:26+0000 lvl=info msg="starting MultiOrg Alertmanager" logger=ngalert.multiorg.alertmanager 2026-03-10T05:47:27.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:27 vm05 bash[17864]: audit 2026-03-10T05:47:26.120149+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:27.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:27 vm05 bash[17864]: audit 2026-03-10T05:47:26.121637+0000 mon.c (mon.1) 57 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:47:27.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:27 vm05 bash[17864]: audit 2026-03-10T05:47:26.122172+0000 mon.c (mon.1) 58 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:47:27.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:27 vm05 bash[17864]: audit 2026-03-10T05:47:26.757137+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:27.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:27 vm05 bash[17864]: audit 2026-03-10T05:47:27.097467+0000 mon.a (mon.0) 627 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]': finished 2026-03-10T05:47:27.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:27 vm05 bash[17864]: audit 2026-03-10T05:47:27.098195+0000 mon.c (mon.1) 59 : audit [INF] from='client.? 192.168.123.102:0/2911771414' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T05:47:27.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:27 vm05 bash[17864]: cluster 2026-03-10T05:47:27.098808+0000 mon.a (mon.0) 628 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-10T05:47:27.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:27 vm05 bash[17864]: audit 2026-03-10T05:47:27.100593+0000 mon.a (mon.0) 629 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T05:47:27.583 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:27 vm02 bash[17462]: audit 2026-03-10T05:47:26.120149+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:27.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:27 vm02 bash[17462]: audit 2026-03-10T05:47:26.121637+0000 mon.c (mon.1) 57 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:47:27.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:27 vm02 bash[17462]: audit 2026-03-10T05:47:26.122172+0000 mon.c (mon.1) 58 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:47:27.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:27 vm02 bash[17462]: audit 2026-03-10T05:47:26.757137+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:27.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:27 vm02 bash[17462]: audit 2026-03-10T05:47:27.097467+0000 mon.a (mon.0) 627 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]': finished 2026-03-10T05:47:27.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:27 vm02 bash[17462]: audit 2026-03-10T05:47:27.098195+0000 mon.c (mon.1) 59 : audit [INF] from='client.? 192.168.123.102:0/2911771414' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-10T05:47:27.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:27 vm02 bash[17462]: cluster 2026-03-10T05:47:27.098808+0000 mon.a (mon.0) 628 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-10T05:47:27.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:27 vm02 bash[17462]: audit 2026-03-10T05:47:27.100593+0000 mon.a (mon.0) 629 : audit [INF] from='client.? 
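The audit lines above are the mon cluster recording the pool tuning that rgw performs while bootstrapping zone z; each cmd=[...] JSON payload maps one-to-one onto a CLI invocation. A sketch of the equivalent manual commands (not needed here, since rgw issues them itself during zone setup):

    ceph osd pool application enable z.rgw.meta rgw
    ceph osd pool set z.rgw.meta pg_autoscale_bias 4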
2026-03-10T05:47:27.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:47:27 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:47:27] "GET /metrics HTTP/1.1" 200 192127 "" "Prometheus/2.33.4"
2026-03-10T05:47:27.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:27 vm02 bash[22526]: audit 2026-03-10T05:47:26.120149+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:27.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:27 vm02 bash[22526]: audit 2026-03-10T05:47:26.121637+0000 mon.c (mon.1) 57 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:47:27.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:27 vm02 bash[22526]: audit 2026-03-10T05:47:26.122172+0000 mon.c (mon.1) 58 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:47:27.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:27 vm02 bash[22526]: audit 2026-03-10T05:47:26.757137+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:27.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:27 vm02 bash[22526]: audit 2026-03-10T05:47:27.097467+0000 mon.a (mon.0) 627 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]': finished
2026-03-10T05:47:27.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:27 vm02 bash[22526]: audit 2026-03-10T05:47:27.098195+0000 mon.c (mon.1) 59 : audit [INF] from='client.? 192.168.123.102:0/2911771414' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
2026-03-10T05:47:27.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:27 vm02 bash[22526]: cluster 2026-03-10T05:47:27.098808+0000 mon.a (mon.0) 628 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in
2026-03-10T05:47:27.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:27 vm02 bash[22526]: audit 2026-03-10T05:47:27.100593+0000 mon.a (mon.0) 629 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
2026-03-10T05:47:29.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:28 vm05 bash[17864]: cluster 2026-03-10T05:47:27.684363+0000 mgr.y (mgr.14409) 40 : cluster [DBG] pgmap v30: 129 pgs: 64 unknown, 32 creating+peering, 33 active+clean; 451 KiB data, 50 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:47:29.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:28 vm05 bash[17864]: audit 2026-03-10T05:47:28.079682+0000 mon.a (mon.0) 630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
2026-03-10T05:47:29.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:28 vm05 bash[17864]: cluster 2026-03-10T05:47:28.079723+0000 mon.a (mon.0) 631 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in
2026-03-10T05:47:29.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:28 vm05 bash[17864]: audit 2026-03-10T05:47:28.083055+0000 mon.c (mon.1) 60 : audit [INF] from='client.? 192.168.123.102:0/2911771414' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]: dispatch
2026-03-10T05:47:29.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:28 vm05 bash[17864]: audit 2026-03-10T05:47:28.084164+0000 mon.a (mon.0) 632 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]: dispatch
2026-03-10T05:47:29.055 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:28 vm02 bash[17462]: cluster 2026-03-10T05:47:27.684363+0000 mgr.y (mgr.14409) 40 : cluster [DBG] pgmap v30: 129 pgs: 64 unknown, 32 creating+peering, 33 active+clean; 451 KiB data, 50 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:47:29.056 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:28 vm02 bash[17462]: audit 2026-03-10T05:47:28.079682+0000 mon.a (mon.0) 630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
2026-03-10T05:47:29.056 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:28 vm02 bash[17462]: cluster 2026-03-10T05:47:28.079723+0000 mon.a (mon.0) 631 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in
2026-03-10T05:47:29.056 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:28 vm02 bash[17462]: audit 2026-03-10T05:47:28.083055+0000 mon.c (mon.1) 60 : audit [INF] from='client.? 192.168.123.102:0/2911771414' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]: dispatch
2026-03-10T05:47:29.056 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:28 vm02 bash[17462]: audit 2026-03-10T05:47:28.084164+0000 mon.a (mon.0) 632 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]: dispatch
2026-03-10T05:47:29.056 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:28 vm02 bash[22526]: cluster 2026-03-10T05:47:27.684363+0000 mgr.y (mgr.14409) 40 : cluster [DBG] pgmap v30: 129 pgs: 64 unknown, 32 creating+peering, 33 active+clean; 451 KiB data, 50 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:47:29.056 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:28 vm02 bash[22526]: audit 2026-03-10T05:47:28.079682+0000 mon.a (mon.0) 630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
2026-03-10T05:47:29.056 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:28 vm02 bash[22526]: cluster 2026-03-10T05:47:28.079723+0000 mon.a (mon.0) 631 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in
2026-03-10T05:47:29.056 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:28 vm02 bash[22526]: audit 2026-03-10T05:47:28.083055+0000 mon.c (mon.1) 60 : audit [INF] from='client.? 192.168.123.102:0/2911771414' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]: dispatch
2026-03-10T05:47:29.056 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:28 vm02 bash[22526]: audit 2026-03-10T05:47:28.084164+0000 mon.a (mon.0) 632 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]: dispatch
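The JSON below is the committed period printed by the radosgw-admin period update --rgw-realm=r --commit step of the cephadm.shell task. A sketch for pulling single fields back out of it later, in the same jq style the upgrade-wait loop uses (assumes the default realm context on the host):

    radosgw-admin period get | jq -r '.period_map.master_zone'
    # prints b64c4367-f368-4901-9b9b-cd9fd0e3497b for this cluster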
' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]: dispatch 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: "id": "7b2f6345-5725-4445-98b3-4123c3e4364d", 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: "epoch": 1, 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: "predecessor_uuid": "263cffb0-ab49-4d9f-a14d-00088d94d487", 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: "sync_status": [], 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: "period_map": { 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: "id": "7b2f6345-5725-4445-98b3-4123c3e4364d", 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: "zonegroups": [ 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: { 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: "id": "9c21ed15-ac42-4b2f-9d98-2a55e5899bb4", 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: "name": "default", 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: "api_name": "default", 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: "is_master": "true", 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: "endpoints": [], 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: "hostnames": [], 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: "hostnames_s3website": [], 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: "master_zone": "b64c4367-f368-4901-9b9b-cd9fd0e3497b", 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: "zones": [ 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: { 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: "id": "b64c4367-f368-4901-9b9b-cd9fd0e3497b", 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: "name": "z", 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: "endpoints": [], 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: "log_meta": "false", 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: "log_data": "false", 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: "bucket_index_max_shards": 11, 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: "read_only": "false", 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: "tier_type": "", 2026-03-10T05:47:29.191 INFO:teuthology.orchestra.run.vm02.stdout: "sync_from_all": "true", 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "sync_from": [], 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "redirect_zone": "" 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: ], 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "placement_targets": [ 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: { 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "name": "default-placement", 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "tags": [], 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "storage_classes": [ 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "STANDARD" 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: ] 
2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: ], 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "default_placement": "default-placement", 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "realm_id": "fb11b072-4c9f-4af9-80fd-732a30fdcbae", 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "sync_policy": { 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "groups": [] 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: ], 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "short_zone_ids": [ 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: { 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "key": "b64c4367-f368-4901-9b9b-cd9fd0e3497b", 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "val": 1740710798 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: ] 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "master_zonegroup": "9c21ed15-ac42-4b2f-9d98-2a55e5899bb4", 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "master_zone": "b64c4367-f368-4901-9b9b-cd9fd0e3497b", 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "period_config": { 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "bucket_quota": { 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "enabled": false, 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "check_on_raw": false, 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "max_size": -1, 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "max_size_kb": 0, 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "max_objects": -1 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "user_quota": { 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "enabled": false, 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "check_on_raw": false, 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "max_size": -1, 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "max_size_kb": 0, 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "max_objects": -1 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "user_ratelimit": { 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "max_read_ops": 0, 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "max_write_ops": 0, 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "max_read_bytes": 0, 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "max_write_bytes": 0, 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "enabled": false 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "bucket_ratelimit": { 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: 
"max_read_ops": 0, 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "max_write_ops": 0, 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "max_read_bytes": 0, 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "max_write_bytes": 0, 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "enabled": false 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "anonymous_ratelimit": { 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "max_read_ops": 0, 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "max_write_ops": 0, 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "max_read_bytes": 0, 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "max_write_bytes": 0, 2026-03-10T05:47:29.192 INFO:teuthology.orchestra.run.vm02.stdout: "enabled": false 2026-03-10T05:47:29.193 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T05:47:29.193 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T05:47:29.193 INFO:teuthology.orchestra.run.vm02.stdout: "realm_id": "fb11b072-4c9f-4af9-80fd-732a30fdcbae", 2026-03-10T05:47:29.193 INFO:teuthology.orchestra.run.vm02.stdout: "realm_name": "r", 2026-03-10T05:47:29.193 INFO:teuthology.orchestra.run.vm02.stdout: "realm_epoch": 2 2026-03-10T05:47:29.193 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T05:47:29.272 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch apply rgw foo --realm r --zone z --placement=2 --port=8000' 2026-03-10T05:47:29.584 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:29 vm02 systemd[1]: Stopping Ceph alertmanager.a for 107483ae-1c44-11f1-b530-c1172cd6122a... 2026-03-10T05:47:29.898 INFO:teuthology.orchestra.run.vm02.stdout:Scheduled rgw.foo update... 2026-03-10T05:47:29.909 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:29 vm02 bash[43289]: Error response from daemon: No such container: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-alertmanager.a 2026-03-10T05:47:29.909 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:29 vm02 bash[39873]: level=info ts=2026-03-10T05:47:29.604Z caller=main.go:557 msg="Received SIGTERM, exiting gracefully..." 2026-03-10T05:47:29.909 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:29 vm02 bash[43297]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-alertmanager-a 2026-03-10T05:47:29.909 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:29 vm02 bash[43369]: Error response from daemon: No such container: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-alertmanager.a 2026-03-10T05:47:29.909 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:29 vm02 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@alertmanager.a.service: Deactivated successfully. 2026-03-10T05:47:29.909 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:29 vm02 systemd[1]: Stopped Ceph alertmanager.a for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T05:47:29.909 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:29 vm02 systemd[1]: Started Ceph alertmanager.a for 107483ae-1c44-11f1-b530-c1172cd6122a. 
2026-03-10T05:47:29.909 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:29 vm02 bash[43400]: level=info ts=2026-03-10T05:47:29.800Z caller=main.go:225 msg="Starting Alertmanager" version="(version=0.23.0, branch=HEAD, revision=61046b17771a57cfd4c4a51be370ab930a4d7d54)" 2026-03-10T05:47:29.909 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:29 vm02 bash[43400]: level=info ts=2026-03-10T05:47:29.800Z caller=main.go:226 build_context="(go=go1.16.7, user=root@e21a959be8d2, date=20210825-10:48:55)" 2026-03-10T05:47:29.909 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:29 vm02 bash[43400]: level=info ts=2026-03-10T05:47:29.802Z caller=cluster.go:184 component=cluster msg="setting advertise address explicitly" addr=192.168.123.102 port=9094 2026-03-10T05:47:29.909 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:29 vm02 bash[43400]: level=info ts=2026-03-10T05:47:29.802Z caller=cluster.go:671 component=cluster msg="Waiting for gossip to settle..." interval=2s 2026-03-10T05:47:29.909 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:29 vm02 bash[43400]: level=info ts=2026-03-10T05:47:29.829Z caller=coordinator.go:113 component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-10T05:47:29.909 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:29 vm02 bash[43400]: level=info ts=2026-03-10T05:47:29.829Z caller=coordinator.go:126 component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-10T05:47:29.909 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:29 vm02 bash[43400]: level=info ts=2026-03-10T05:47:29.830Z caller=main.go:518 msg=Listening address=:9093 2026-03-10T05:47:29.909 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:29 vm02 bash[43400]: level=info ts=2026-03-10T05:47:29.830Z caller=tls_config.go:191 msg="TLS is disabled." http2=false 2026-03-10T05:47:29.956 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch apply rgw smpl' 2026-03-10T05:47:30.226 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:30 vm02 bash[17462]: audit 2026-03-10T05:47:29.065567+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:30.226 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:30 vm02 bash[17462]: audit 2026-03-10T05:47:29.087203+0000 mon.a (mon.0) 634 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]': finished 2026-03-10T05:47:30.226 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:30 vm02 bash[17462]: cluster 2026-03-10T05:47:29.087323+0000 mon.a (mon.0) 635 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-10T05:47:30.226 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:30 vm02 bash[17462]: audit 2026-03-10T05:47:29.188340+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:30.226 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:30 vm02 bash[17462]: audit 2026-03-10T05:47:29.205796+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:30.226 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:30 vm02 bash[17462]: cephadm 2026-03-10T05:47:29.207615+0000 mgr.y (mgr.14409) 41 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-10T05:47:30.226 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:30 vm02 bash[17462]: cephadm 2026-03-10T05:47:29.209194+0000 mgr.y (mgr.14409) 42 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm02 2026-03-10T05:47:30.226 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:30 vm02 bash[17462]: audit 2026-03-10T05:47:29.681662+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:30.226 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:30 vm02 bash[17462]: cluster 2026-03-10T05:47:29.684601+0000 mgr.y (mgr.14409) 43 : cluster [DBG] pgmap v33: 129 pgs: 129 active+clean; 451 KiB data, 53 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s 2026-03-10T05:47:30.226 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:30 vm02 bash[17462]: cephadm 2026-03-10T05:47:29.685068+0000 mgr.y (mgr.14409) 44 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-10T05:47:30.226 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:30 vm02 bash[17462]: cephadm 2026-03-10T05:47:29.688205+0000 mgr.y (mgr.14409) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm05 2026-03-10T05:47:30.226 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:30 vm02 bash[17462]: audit 2026-03-10T05:47:29.895751+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:30.226 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:30 vm02 bash[17462]: audit 2026-03-10T05:47:30.060118+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:30.226 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:30 vm02 bash[22526]: audit 2026-03-10T05:47:29.065567+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:30.226 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:30 vm02 bash[22526]: audit 2026-03-10T05:47:29.087203+0000 mon.a (mon.0) 634 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]': finished 2026-03-10T05:47:30.226 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:30 vm02 bash[22526]: cluster 2026-03-10T05:47:29.087323+0000 mon.a (mon.0) 635 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-10T05:47:30.226 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:30 vm02 bash[22526]: audit 2026-03-10T05:47:29.188340+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:30.226 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:30 vm02 bash[22526]: audit 2026-03-10T05:47:29.205796+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:30.226 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:30 vm02 bash[22526]: cephadm 2026-03-10T05:47:29.207615+0000 mgr.y (mgr.14409) 41 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-10T05:47:30.226 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:30 vm02 bash[22526]: cephadm 2026-03-10T05:47:29.209194+0000 mgr.y (mgr.14409) 42 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm02 2026-03-10T05:47:30.226 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:30 vm02 bash[22526]: audit 2026-03-10T05:47:29.681662+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:30.226 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:30 vm02 bash[22526]: cluster 2026-03-10T05:47:29.684601+0000 mgr.y (mgr.14409) 43 : cluster [DBG] pgmap v33: 129 pgs: 129 active+clean; 451 KiB data, 53 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s 2026-03-10T05:47:30.226 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:30 vm02 bash[22526]: cephadm 2026-03-10T05:47:29.685068+0000 mgr.y (mgr.14409) 44 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-10T05:47:30.226 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:30 vm02 bash[22526]: cephadm 2026-03-10T05:47:29.688205+0000 mgr.y (mgr.14409) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm05 2026-03-10T05:47:30.226 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:30 vm02 bash[22526]: audit 2026-03-10T05:47:29.895751+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:30.226 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:30 vm02 bash[22526]: audit 2026-03-10T05:47:30.060118+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:30.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:30 vm05 bash[17864]: audit 2026-03-10T05:47:29.065567+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:30.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:30 vm05 bash[17864]: audit 2026-03-10T05:47:29.087203+0000 mon.a (mon.0) 634 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]': finished 2026-03-10T05:47:30.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:30 vm05 bash[17864]: cluster 2026-03-10T05:47:29.087323+0000 mon.a (mon.0) 635 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-10T05:47:30.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:30 vm05 bash[17864]: audit 2026-03-10T05:47:29.188340+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:30.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:30 vm05 bash[17864]: audit 2026-03-10T05:47:29.205796+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:30.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:30 vm05 bash[17864]: cephadm 2026-03-10T05:47:29.207615+0000 mgr.y (mgr.14409) 41 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-10T05:47:30.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:30 vm05 bash[17864]: cephadm 2026-03-10T05:47:29.209194+0000 mgr.y (mgr.14409) 42 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm02 2026-03-10T05:47:30.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:30 vm05 bash[17864]: audit 2026-03-10T05:47:29.681662+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:30.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:30 vm05 bash[17864]: cluster 2026-03-10T05:47:29.684601+0000 mgr.y (mgr.14409) 43 : cluster [DBG] pgmap v33: 129 pgs: 129 active+clean; 451 KiB data, 53 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s 2026-03-10T05:47:30.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:30 vm05 bash[17864]: cephadm 2026-03-10T05:47:29.685068+0000 mgr.y (mgr.14409) 44 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-10T05:47:30.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:30 vm05 bash[17864]: cephadm 2026-03-10T05:47:29.688205+0000 mgr.y (mgr.14409) 45 : cephadm [INF] Reconfiguring daemon prometheus.a on vm05 2026-03-10T05:47:30.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:30 vm05 bash[17864]: audit 2026-03-10T05:47:29.895751+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:30.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:30 vm05 bash[17864]: audit 2026-03-10T05:47:30.060118+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:30.258 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:29 vm05 systemd[1]: Stopping Ceph prometheus.a for 107483ae-1c44-11f1-b530-c1172cd6122a... 2026-03-10T05:47:30.258 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:29 vm05 bash[33887]: Error response from daemon: No such container: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-prometheus.a 2026-03-10T05:47:30.258 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 bash[33062]: ts=2026-03-10T05:47:29.995Z caller=main.go:775 level=warn msg="Received SIGTERM, exiting gracefully..." 2026-03-10T05:47:30.258 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 bash[33062]: ts=2026-03-10T05:47:29.995Z caller=main.go:798 level=info msg="Stopping scrape discovery manager..." 2026-03-10T05:47:30.258 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 bash[33062]: ts=2026-03-10T05:47:29.995Z caller=main.go:812 level=info msg="Stopping notify discovery manager..." 
2026-03-10T05:47:30.258 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 bash[33062]: ts=2026-03-10T05:47:29.995Z caller=main.go:834 level=info msg="Stopping scrape manager..." 2026-03-10T05:47:30.258 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 bash[33062]: ts=2026-03-10T05:47:29.995Z caller=main.go:794 level=info msg="Scrape discovery manager stopped" 2026-03-10T05:47:30.258 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 bash[33062]: ts=2026-03-10T05:47:29.995Z caller=main.go:808 level=info msg="Notify discovery manager stopped" 2026-03-10T05:47:30.258 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 bash[33062]: ts=2026-03-10T05:47:29.995Z caller=manager.go:945 level=info component="rule manager" msg="Stopping rule manager..." 2026-03-10T05:47:30.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 bash[33062]: ts=2026-03-10T05:47:29.995Z caller=manager.go:955 level=info component="rule manager" msg="Rule manager stopped" 2026-03-10T05:47:30.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 bash[33062]: ts=2026-03-10T05:47:29.996Z caller=main.go:828 level=info msg="Scrape manager stopped" 2026-03-10T05:47:30.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 bash[33062]: ts=2026-03-10T05:47:29.996Z caller=notifier.go:600 level=info component=notifier msg="Stopping notification manager..." 2026-03-10T05:47:30.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 bash[33062]: ts=2026-03-10T05:47:29.996Z caller=main.go:1054 level=info msg="Notifier manager stopped" 2026-03-10T05:47:30.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 bash[33062]: ts=2026-03-10T05:47:29.996Z caller=main.go:1066 level=info msg="See you next time!" 2026-03-10T05:47:30.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 bash[33895]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-prometheus-a 2026-03-10T05:47:30.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 bash[33928]: Error response from daemon: No such container: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-prometheus.a 2026-03-10T05:47:30.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@prometheus.a.service: Deactivated successfully. 2026-03-10T05:47:30.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 systemd[1]: Stopped Ceph prometheus.a for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T05:47:30.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 systemd[1]: Started Ceph prometheus.a for 107483ae-1c44-11f1-b530-c1172cd6122a. 
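prometheus.a goes through the same cycle alertmanager.a did a moment earlier: systemd stops the unit, the container is removed (the 'No such container' errors look like the removal step racing a container that is already gone), and the unit starts again with the regenerated config. A sketch to confirm the daemon settled, using the unit name from the journal; it assumes jq on the host and that 'cephadm ls' reports 'name' and 'state' fields per daemon:

    fsid=107483ae-1c44-11f1-b530-c1172cd6122a
    sudo systemctl --no-pager status "ceph-${fsid}@prometheus.a.service"
    sudo /home/ubuntu/cephtest/cephadm ls \
        | jq '.[] | select(.name == "prometheus.a") | {name, state}'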
2026-03-10T05:47:30.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 bash[33954]: ts=2026-03-10T05:47:30.164Z caller=main.go:475 level=info msg="No time or size retention was set so using the default time retention" duration=15d 2026-03-10T05:47:30.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 bash[33954]: ts=2026-03-10T05:47:30.164Z caller=main.go:512 level=info msg="Starting Prometheus" version="(version=2.33.4, branch=HEAD, revision=83032011a5d3e6102624fe58241a374a7201fee8)" 2026-03-10T05:47:30.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 bash[33954]: ts=2026-03-10T05:47:30.164Z caller=main.go:517 level=info build_context="(go=go1.17.7, user=root@d13bf69e7be8, date=20220222-16:51:28)" 2026-03-10T05:47:30.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 bash[33954]: ts=2026-03-10T05:47:30.164Z caller=main.go:518 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm05 (none))" 2026-03-10T05:47:30.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 bash[33954]: ts=2026-03-10T05:47:30.164Z caller=main.go:519 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-10T05:47:30.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 bash[33954]: ts=2026-03-10T05:47:30.164Z caller=main.go:520 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-10T05:47:30.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 bash[33954]: ts=2026-03-10T05:47:30.165Z caller=web.go:570 level=info component=web msg="Start listening for connections" address=:9095 2026-03-10T05:47:30.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 bash[33954]: ts=2026-03-10T05:47:30.166Z caller=main.go:923 level=info msg="Starting TSDB ..." 2026-03-10T05:47:30.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 bash[33954]: ts=2026-03-10T05:47:30.170Z caller=tls_config.go:195 level=info component=web msg="TLS is disabled." http2=false 2026-03-10T05:47:30.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 bash[33954]: ts=2026-03-10T05:47:30.174Z caller=head.go:493 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-10T05:47:30.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 bash[33954]: ts=2026-03-10T05:47:30.174Z caller=head.go:527 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.554µs 2026-03-10T05:47:30.259 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:30 vm05 bash[33954]: ts=2026-03-10T05:47:30.174Z caller=head.go:533 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-10T05:47:30.424 INFO:teuthology.orchestra.run.vm02.stdout:Scheduled rgw.smpl update... 
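Once the WAL replay finishes and 'Server is ready to receive web requests' appears, Prometheus is serving on :9095, the non-default port this deployment uses (the dashboard is pointed at http://192.168.123.105:9095 just below). A readiness probe sketch against the stock Prometheus management endpoints:

    # /-/ready and /-/healthy are standard Prometheus endpoints; host and
    # port as wired into the dashboard below.
    curl -fsS http://192.168.123.105:9095/-/ready   && echo prometheus ready
    curl -fsS http://192.168.123.105:9095/-/healthy && echo prometheus healthy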
2026-03-10T05:47:30.495 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph osd pool create foo' 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:31 vm02 bash[17462]: audit 2026-03-10T05:47:29.888944+0000 mgr.y (mgr.14409) 46 : audit [DBG] from='client.24461 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo", "realm": "r", "zone": "z", "placement": "2", "port": 8000, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:31 vm02 bash[17462]: cephadm 2026-03-10T05:47:29.889919+0000 mgr.y (mgr.14409) 47 : cephadm [INF] Saving service rgw.foo spec with placement count:2 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:31 vm02 bash[17462]: audit 2026-03-10T05:47:30.065373+0000 mon.c (mon.1) 61 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:31 vm02 bash[17462]: audit 2026-03-10T05:47:30.065727+0000 mgr.y (mgr.14409) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:31 vm02 bash[17462]: audit 2026-03-10T05:47:30.076375+0000 mon.c (mon.1) 62 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.123.102:9093"}]: dispatch 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:31 vm02 bash[17462]: audit 2026-03-10T05:47:30.076644+0000 mgr.y (mgr.14409) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.123.102:9093"}]: dispatch 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:31 vm02 bash[17462]: audit 2026-03-10T05:47:30.081690+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:31 vm02 bash[17462]: audit 2026-03-10T05:47:30.091450+0000 mon.c (mon.1) 63 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:31 vm02 bash[17462]: audit 2026-03-10T05:47:30.092013+0000 mgr.y (mgr.14409) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:31 vm02 bash[17462]: audit 2026-03-10T05:47:30.093736+0000 mon.c (mon.1) 64 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.123.105:3000"}]: dispatch 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:31 vm02 bash[17462]: audit 2026-03-10T05:47:30.094102+0000 mgr.y (mgr.14409) 51 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.123.105:3000"}]: dispatch 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:31 vm02 bash[17462]: audit 2026-03-10T05:47:30.100535+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:31 vm02 bash[17462]: audit 2026-03-10T05:47:30.111171+0000 mon.c (mon.1) 65 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:31 vm02 bash[17462]: audit 2026-03-10T05:47:30.111595+0000 mgr.y (mgr.14409) 52 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:31 vm02 bash[17462]: audit 2026-03-10T05:47:30.116327+0000 mon.c (mon.1) 66 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.123.105:9095"}]: dispatch 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:31 vm02 bash[17462]: audit 2026-03-10T05:47:30.117284+0000 mgr.y (mgr.14409) 53 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.123.105:9095"}]: dispatch 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:31 vm02 bash[17462]: audit 2026-03-10T05:47:30.122214+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:31 vm02 bash[17462]: audit 2026-03-10T05:47:30.130143+0000 mon.c (mon.1) 67 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:31 vm02 bash[17462]: audit 2026-03-10T05:47:30.131618+0000 mon.c (mon.1) 68 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:31 vm02 bash[17462]: audit 2026-03-10T05:47:30.416565+0000 mgr.y (mgr.14409) 54 : audit [DBG] from='client.14583 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "smpl", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:31 vm02 bash[17462]: cephadm 2026-03-10T05:47:30.417300+0000 mgr.y (mgr.14409) 55 : cephadm [INF] Saving service rgw.smpl spec with placement count:2 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:31 vm02 bash[17462]: audit 2026-03-10T05:47:30.423237+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:31 vm02 bash[17462]: audit 2026-03-10T05:47:30.876425+0000 mon.c (mon.1) 69 : audit [INF] from='client.? 192.168.123.102:0/3130871225' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "foo"}]: dispatch 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:31 vm02 bash[17462]: audit 2026-03-10T05:47:30.876693+0000 mon.a (mon.0) 645 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "foo"}]: dispatch 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:31 vm02 bash[22526]: audit 2026-03-10T05:47:29.888944+0000 mgr.y (mgr.14409) 46 : audit [DBG] from='client.24461 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo", "realm": "r", "zone": "z", "placement": "2", "port": 8000, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:31 vm02 bash[22526]: cephadm 2026-03-10T05:47:29.889919+0000 mgr.y (mgr.14409) 47 : cephadm [INF] Saving service rgw.foo spec with placement count:2 2026-03-10T05:47:31.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:31 vm02 bash[22526]: audit 2026-03-10T05:47:30.065373+0000 mon.c (mon.1) 61 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T05:47:31.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:31 vm02 bash[22526]: audit 2026-03-10T05:47:30.065727+0000 mgr.y (mgr.14409) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T05:47:31.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:31 vm02 bash[22526]: audit 2026-03-10T05:47:30.076375+0000 mon.c (mon.1) 62 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.123.102:9093"}]: dispatch 2026-03-10T05:47:31.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:31 vm02 bash[22526]: audit 2026-03-10T05:47:30.076644+0000 mgr.y (mgr.14409) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.123.102:9093"}]: dispatch 2026-03-10T05:47:31.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:31 vm02 bash[22526]: audit 2026-03-10T05:47:30.081690+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:31.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:31 vm02 bash[22526]: audit 2026-03-10T05:47:30.091450+0000 mon.c (mon.1) 63 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T05:47:31.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:31 vm02 bash[22526]: audit 2026-03-10T05:47:30.092013+0000 mgr.y (mgr.14409) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T05:47:31.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:31 vm02 bash[22526]: audit 2026-03-10T05:47:30.093736+0000 mon.c (mon.1) 64 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.123.105:3000"}]: dispatch 2026-03-10T05:47:31.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:31 vm02 bash[22526]: audit 2026-03-10T05:47:30.094102+0000 mgr.y (mgr.14409) 51 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.123.105:3000"}]: dispatch 2026-03-10T05:47:31.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:31 vm02 bash[22526]: audit 2026-03-10T05:47:30.100535+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:31.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:31 vm02 bash[22526]: audit 2026-03-10T05:47:30.111171+0000 mon.c (mon.1) 65 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T05:47:31.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:31 vm02 bash[22526]: audit 2026-03-10T05:47:30.111595+0000 mgr.y (mgr.14409) 52 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T05:47:31.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:31 vm02 bash[22526]: audit 2026-03-10T05:47:30.116327+0000 mon.c (mon.1) 66 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.123.105:9095"}]: dispatch 2026-03-10T05:47:31.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:31 vm02 bash[22526]: audit 2026-03-10T05:47:30.117284+0000 mgr.y (mgr.14409) 53 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.123.105:9095"}]: dispatch 2026-03-10T05:47:31.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:31 vm02 bash[22526]: audit 2026-03-10T05:47:30.122214+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:31.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:31 vm02 bash[22526]: audit 2026-03-10T05:47:30.130143+0000 mon.c (mon.1) 67 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:47:31.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:31 vm02 bash[22526]: audit 2026-03-10T05:47:30.131618+0000 mon.c (mon.1) 68 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:47:31.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:31 vm02 bash[22526]: audit 2026-03-10T05:47:30.416565+0000 mgr.y (mgr.14409) 54 : audit [DBG] from='client.14583 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "smpl", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:47:31.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:31 vm02 bash[22526]: cephadm 2026-03-10T05:47:30.417300+0000 mgr.y (mgr.14409) 55 : cephadm [INF] Saving service rgw.smpl spec with placement count:2 2026-03-10T05:47:31.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:31 vm02 bash[22526]: audit 2026-03-10T05:47:30.423237+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:31.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:31 vm02 bash[22526]: audit 2026-03-10T05:47:30.876425+0000 mon.c (mon.1) 69 : audit [INF] from='client.? 192.168.123.102:0/3130871225' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "foo"}]: dispatch 2026-03-10T05:47:31.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:31 vm02 bash[22526]: audit 2026-03-10T05:47:30.876693+0000 mon.a (mon.0) 645 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "foo"}]: dispatch 2026-03-10T05:47:31.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:31 vm05 bash[17864]: audit 2026-03-10T05:47:29.888944+0000 mgr.y (mgr.14409) 46 : audit [DBG] from='client.24461 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo", "realm": "r", "zone": "z", "placement": "2", "port": 8000, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:47:31.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:31 vm05 bash[17864]: cephadm 2026-03-10T05:47:29.889919+0000 mgr.y (mgr.14409) 47 : cephadm [INF] Saving service rgw.foo spec with placement count:2 2026-03-10T05:47:31.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:31 vm05 bash[17864]: audit 2026-03-10T05:47:30.065373+0000 mon.c (mon.1) 61 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T05:47:31.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:31 vm05 bash[17864]: audit 2026-03-10T05:47:30.065727+0000 mgr.y (mgr.14409) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T05:47:31.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:31 vm05 bash[17864]: audit 2026-03-10T05:47:30.076375+0000 mon.c (mon.1) 62 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.123.102:9093"}]: dispatch 2026-03-10T05:47:31.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:31 vm05 bash[17864]: audit 2026-03-10T05:47:30.076644+0000 mgr.y (mgr.14409) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.123.102:9093"}]: dispatch 2026-03-10T05:47:31.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:31 vm05 bash[17864]: audit 2026-03-10T05:47:30.081690+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:31.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:31 vm05 bash[17864]: audit 2026-03-10T05:47:30.091450+0000 mon.c (mon.1) 63 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T05:47:31.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:31 vm05 bash[17864]: audit 2026-03-10T05:47:30.092013+0000 mgr.y (mgr.14409) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T05:47:31.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:31 vm05 bash[17864]: audit 2026-03-10T05:47:30.093736+0000 mon.c (mon.1) 64 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.123.105:3000"}]: dispatch 2026-03-10T05:47:31.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:31 vm05 bash[17864]: audit 2026-03-10T05:47:30.094102+0000 mgr.y (mgr.14409) 51 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.123.105:3000"}]: dispatch 2026-03-10T05:47:31.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:31 vm05 bash[17864]: audit 2026-03-10T05:47:30.100535+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:31.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:31 vm05 bash[17864]: audit 2026-03-10T05:47:30.111171+0000 mon.c (mon.1) 65 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T05:47:31.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:31 vm05 bash[17864]: audit 2026-03-10T05:47:30.111595+0000 mgr.y (mgr.14409) 52 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T05:47:31.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:31 vm05 bash[17864]: audit 2026-03-10T05:47:30.116327+0000 mon.c (mon.1) 66 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.123.105:9095"}]: dispatch 2026-03-10T05:47:31.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:31 vm05 bash[17864]: audit 2026-03-10T05:47:30.117284+0000 mgr.y (mgr.14409) 53 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.123.105:9095"}]: dispatch 2026-03-10T05:47:31.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:31 vm05 bash[17864]: audit 2026-03-10T05:47:30.122214+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:31.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:31 vm05 bash[17864]: audit 2026-03-10T05:47:30.130143+0000 mon.c (mon.1) 67 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:47:31.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:31 vm05 bash[17864]: audit 2026-03-10T05:47:30.131618+0000 mon.c (mon.1) 68 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:47:31.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:31 vm05 bash[17864]: audit 2026-03-10T05:47:30.416565+0000 mgr.y (mgr.14409) 54 : audit [DBG] from='client.14583 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "smpl", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:47:31.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:31 vm05 bash[17864]: cephadm 2026-03-10T05:47:30.417300+0000 mgr.y (mgr.14409) 55 : cephadm [INF] Saving service rgw.smpl spec with placement count:2 2026-03-10T05:47:31.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:31 vm05 bash[17864]: audit 2026-03-10T05:47:30.423237+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:31.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:31 vm05 bash[17864]: audit 2026-03-10T05:47:30.876425+0000 mon.c (mon.1) 69 : audit [INF] from='client.? 192.168.123.102:0/3130871225' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "foo"}]: dispatch 2026-03-10T05:47:31.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:31 vm05 bash[17864]: audit 2026-03-10T05:47:30.876693+0000 mon.a (mon.0) 645 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "foo"}]: dispatch 2026-03-10T05:47:31.705 INFO:teuthology.orchestra.run.vm02.stderr:pool 'foo' created 2026-03-10T05:47:31.757 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'rbd pool init foo' 2026-03-10T05:47:31.973 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:31 vm02 bash[43400]: level=info ts=2026-03-10T05:47:31.803Z caller=cluster.go:696 component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000536602s 2026-03-10T05:47:32.008 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:31 vm05 bash[33954]: ts=2026-03-10T05:47:31.503Z caller=head.go:604 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=1 2026-03-10T05:47:32.008 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:31 vm05 bash[33954]: ts=2026-03-10T05:47:31.503Z caller=head.go:604 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=1 2026-03-10T05:47:32.008 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:31 vm05 bash[33954]: ts=2026-03-10T05:47:31.503Z caller=head.go:610 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=235.831µs wal_replay_duration=1.32863181s total_replay_duration=1.328881216s 2026-03-10T05:47:32.008 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:31 vm05 bash[33954]: ts=2026-03-10T05:47:31.504Z caller=main.go:944 level=info fs_type=EXT4_SUPER_MAGIC 2026-03-10T05:47:32.008 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:31 vm05 bash[33954]: ts=2026-03-10T05:47:31.504Z caller=main.go:947 level=info msg="TSDB started" 2026-03-10T05:47:32.008 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:31 vm05 bash[33954]: ts=2026-03-10T05:47:31.504Z caller=main.go:1128 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-10T05:47:32.008 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:31 vm05 bash[33954]: ts=2026-03-10T05:47:31.513Z caller=main.go:1165 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=8.741673ms db_storage=461ns remote_storage=1.173µs web_handler=211ns query_engine=811ns scrape=688.688µs scrape_sd=42.068µs notify=29.645µs notify_sd=6.071µs rules=7.776156ms 2026-03-10T05:47:32.008 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:31 vm05 bash[33954]: ts=2026-03-10T05:47:31.513Z caller=main.go:896 level=info msg="Server is ready to receive web requests." 2026-03-10T05:47:33.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:32 vm05 bash[17864]: cluster 2026-03-10T05:47:31.684837+0000 mgr.y (mgr.14409) 56 : cluster [DBG] pgmap v34: 129 pgs: 129 active+clean; 451 KiB data, 53 MiB used, 160 GiB / 160 GiB avail; 182 B/s rd, 365 B/s wr, 1 op/s 2026-03-10T05:47:33.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:32 vm05 bash[17864]: audit 2026-03-10T05:47:31.695137+0000 mon.a (mon.0) 646 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "foo"}]': finished 2026-03-10T05:47:33.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:32 vm05 bash[17864]: cluster 2026-03-10T05:47:31.695203+0000 mon.a (mon.0) 647 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-10T05:47:33.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:32 vm05 bash[17864]: audit 2026-03-10T05:47:32.007165+0000 mon.a (mon.0) 648 : audit [INF] from='client.? 192.168.123.102:0/4005702109' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]: dispatch 2026-03-10T05:47:33.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:32 vm02 bash[17462]: cluster 2026-03-10T05:47:31.684837+0000 mgr.y (mgr.14409) 56 : cluster [DBG] pgmap v34: 129 pgs: 129 active+clean; 451 KiB data, 53 MiB used, 160 GiB / 160 GiB avail; 182 B/s rd, 365 B/s wr, 1 op/s 2026-03-10T05:47:33.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:32 vm02 bash[17462]: audit 2026-03-10T05:47:31.695137+0000 mon.a (mon.0) 646 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "foo"}]': finished 2026-03-10T05:47:33.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:32 vm02 bash[17462]: cluster 2026-03-10T05:47:31.695203+0000 mon.a (mon.0) 647 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-10T05:47:33.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:32 vm02 bash[17462]: audit 2026-03-10T05:47:32.007165+0000 mon.a (mon.0) 648 : audit [INF] from='client.? 192.168.123.102:0/4005702109' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]: dispatch 2026-03-10T05:47:33.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:32 vm02 bash[22526]: cluster 2026-03-10T05:47:31.684837+0000 mgr.y (mgr.14409) 56 : cluster [DBG] pgmap v34: 129 pgs: 129 active+clean; 451 KiB data, 53 MiB used, 160 GiB / 160 GiB avail; 182 B/s rd, 365 B/s wr, 1 op/s 2026-03-10T05:47:33.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:32 vm02 bash[22526]: audit 2026-03-10T05:47:31.695137+0000 mon.a (mon.0) 646 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "foo"}]': finished 2026-03-10T05:47:33.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:32 vm02 bash[22526]: cluster 2026-03-10T05:47:31.695203+0000 mon.a (mon.0) 647 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-10T05:47:33.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:32 vm02 bash[22526]: audit 2026-03-10T05:47:32.007165+0000 mon.a (mon.0) 648 : audit [INF] from='client.? 192.168.123.102:0/4005702109' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]: dispatch 2026-03-10T05:47:34.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:33 vm05 bash[17864]: audit 2026-03-10T05:47:32.699056+0000 mon.a (mon.0) 649 : audit [INF] from='client.? 
192.168.123.102:0/4005702109' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]': finished 2026-03-10T05:47:34.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:33 vm05 bash[17864]: cluster 2026-03-10T05:47:32.699137+0000 mon.a (mon.0) 650 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-10T05:47:34.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:33 vm05 bash[17864]: audit 2026-03-10T05:47:33.131486+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:34.067 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:33 vm02 bash[17462]: audit 2026-03-10T05:47:32.699056+0000 mon.a (mon.0) 649 : audit [INF] from='client.? 192.168.123.102:0/4005702109' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]': finished 2026-03-10T05:47:34.067 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:33 vm02 bash[17462]: cluster 2026-03-10T05:47:32.699137+0000 mon.a (mon.0) 650 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-10T05:47:34.067 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:33 vm02 bash[17462]: audit 2026-03-10T05:47:33.131486+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:34.067 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:33 vm02 bash[22526]: audit 2026-03-10T05:47:32.699056+0000 mon.a (mon.0) 649 : audit [INF] from='client.? 192.168.123.102:0/4005702109' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]': finished 2026-03-10T05:47:34.067 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:33 vm02 bash[22526]: cluster 2026-03-10T05:47:32.699137+0000 mon.a (mon.0) 650 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-10T05:47:34.067 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:33 vm02 bash[22526]: audit 2026-03-10T05:47:33.131486+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:34.620 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:47:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:34.620 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:47:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:34.621 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:47:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
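'rbd pool init foo' is what produced the 'osd pool application enable ... "app": "rbd"' audit trail above: initializing a pool for RBD tags it with the rbd application so it can hold images. A sketch to verify the tag once the init returns (the expected output is an assumption about the command's JSON form):

    sudo /home/ubuntu/cephtest/cephadm shell \
        --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- \
        ceph osd pool application get foo
    # expected once "rbd pool init foo" completes: {"rbd": {}}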
2026-03-10T05:47:34.621 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:34.621 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:47:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:34.621 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:47:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:34.621 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:34.621 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:47:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:34.621 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:34.801 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch apply iscsi foo u p' 2026-03-10T05:47:34.972 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T05:47:34.972 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:34 vm02 bash[22526]: cluster 2026-03-10T05:47:33.685311+0000 mgr.y (mgr.14409) 57 : cluster [DBG] pgmap v37: 161 pgs: 2 creating+peering, 27 creating+activating, 132 active+clean; 453 KiB data, 54 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s
2026-03-10T05:47:34.972 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:34 vm02 bash[22526]: cluster 2026-03-10T05:47:33.720347+0000 mon.a (mon.0) 652 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in
2026-03-10T05:47:34.972 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:34 vm02 bash[22526]: audit 2026-03-10T05:47:34.075376+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:34.972 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:34 vm02 bash[22526]: audit 2026-03-10T05:47:34.081544+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:34.972 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:34 vm02 bash[22526]: audit 2026-03-10T05:47:34.088422+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:34.972 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:34 vm02 bash[22526]: audit 2026-03-10T05:47:34.093013+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:34.972 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:34 vm02 bash[22526]: audit 2026-03-10T05:47:34.097578+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:34.972 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:34 vm02 bash[22526]: audit 2026-03-10T05:47:34.099276+0000 mon.c (mon.1) 70 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm02.pbogjd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:47:34.972 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:34 vm02 bash[22526]: audit 2026-03-10T05:47:34.099491+0000 mon.a (mon.0) 658 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm02.pbogjd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:47:34.972 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:34 vm02 bash[22526]: audit 2026-03-10T05:47:34.101740+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm02.pbogjd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
2026-03-10T05:47:34.972 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:34 vm02 bash[22526]: audit 2026-03-10T05:47:34.105951+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:34.972 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:34 vm02 bash[22526]: audit 2026-03-10T05:47:34.108146+0000 mon.c (mon.1) 71 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:47:34.972 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:34 vm02 bash[22526]: audit 2026-03-10T05:47:34.716286+0000 mon.a (mon.0) 661 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:34.972 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:34 vm02 bash[22526]: audit 2026-03-10T05:47:34.720191+0000 mon.c (mon.1) 72 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.hvmsxl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:47:34.972 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:34 vm02 bash[22526]: cluster 2026-03-10T05:47:34.724394+0000 mon.a (mon.0) 662 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in
2026-03-10T05:47:34.972 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:34 vm02 bash[22526]: audit 2026-03-10T05:47:34.727265+0000 mon.a (mon.0) 663 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.hvmsxl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:47:34.972 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:47:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:34.972 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:47:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:34.973 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:47:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:34.973 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:47:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:34.973 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:34.973 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:34.973 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:34 vm02 bash[17462]: cluster 2026-03-10T05:47:33.685311+0000 mgr.y (mgr.14409) 57 : cluster [DBG] pgmap v37: 161 pgs: 2 creating+peering, 27 creating+activating, 132 active+clean; 453 KiB data, 54 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s
2026-03-10T05:47:34.973 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:34 vm02 bash[17462]: cluster 2026-03-10T05:47:33.720347+0000 mon.a (mon.0) 652 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in
2026-03-10T05:47:34.973 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:34 vm02 bash[17462]: audit 2026-03-10T05:47:34.075376+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:34.973 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:34 vm02 bash[17462]: audit 2026-03-10T05:47:34.081544+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:34.973 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:34 vm02 bash[17462]: audit 2026-03-10T05:47:34.088422+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:34.973 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:34 vm02 bash[17462]: audit 2026-03-10T05:47:34.093013+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:34.973 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:34 vm02 bash[17462]: audit 2026-03-10T05:47:34.097578+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:34.973 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:34 vm02 bash[17462]: audit 2026-03-10T05:47:34.099276+0000 mon.c (mon.1) 70 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm02.pbogjd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:47:34.973 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:34 vm02 bash[17462]: audit 2026-03-10T05:47:34.099491+0000 mon.a (mon.0) 658 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm02.pbogjd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:47:34.973 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:34 vm02 bash[17462]: audit 2026-03-10T05:47:34.101740+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm02.pbogjd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
2026-03-10T05:47:34.973 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:34 vm02 bash[17462]: audit 2026-03-10T05:47:34.105951+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:34.973 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:34 vm02 bash[17462]: audit 2026-03-10T05:47:34.108146+0000 mon.c (mon.1) 71 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:47:34.973 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:34 vm02 bash[17462]: audit 2026-03-10T05:47:34.716286+0000 mon.a (mon.0) 661 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:34.973 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:34 vm02 bash[17462]: audit 2026-03-10T05:47:34.720191+0000 mon.c (mon.1) 72 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.hvmsxl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:47:34.973 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:34 vm02 bash[17462]: cluster 2026-03-10T05:47:34.724394+0000 mon.a (mon.0) 662 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in
2026-03-10T05:47:34.973 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:34 vm02 bash[17462]: audit 2026-03-10T05:47:34.727265+0000 mon.a (mon.0) 663 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.hvmsxl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:47:34.973 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:47:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:34.973 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:47:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:34 vm05 bash[17864]: cluster 2026-03-10T05:47:33.685311+0000 mgr.y (mgr.14409) 57 : cluster [DBG] pgmap v37: 161 pgs: 2 creating+peering, 27 creating+activating, 132 active+clean; 453 KiB data, 54 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s
2026-03-10T05:47:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:34 vm05 bash[17864]: cluster 2026-03-10T05:47:33.720347+0000 mon.a (mon.0) 652 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in
2026-03-10T05:47:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:34 vm05 bash[17864]: audit 2026-03-10T05:47:34.075376+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:34 vm05 bash[17864]: audit 2026-03-10T05:47:34.081544+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:34 vm05 bash[17864]: audit 2026-03-10T05:47:34.088422+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:34 vm05 bash[17864]: audit 2026-03-10T05:47:34.093013+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:34 vm05 bash[17864]: audit 2026-03-10T05:47:34.097578+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:34 vm05 bash[17864]: audit 2026-03-10T05:47:34.099276+0000 mon.c (mon.1) 70 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm02.pbogjd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:47:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:34 vm05 bash[17864]: audit 2026-03-10T05:47:34.099491+0000 mon.a (mon.0) 658 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm02.pbogjd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:47:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:34 vm05 bash[17864]: audit 2026-03-10T05:47:34.101740+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm02.pbogjd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
2026-03-10T05:47:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:34 vm05 bash[17864]: audit 2026-03-10T05:47:34.105951+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:34 vm05 bash[17864]: audit 2026-03-10T05:47:34.108146+0000 mon.c (mon.1) 71 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:47:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:34 vm05 bash[17864]: audit 2026-03-10T05:47:34.716286+0000 mon.a (mon.0) 661 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:34 vm05 bash[17864]: audit 2026-03-10T05:47:34.720191+0000 mon.c (mon.1) 72 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.hvmsxl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:47:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:34 vm05 bash[17864]: cluster 2026-03-10T05:47:34.724394+0000 mon.a (mon.0) 662 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in
2026-03-10T05:47:35.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:34 vm05 bash[17864]: audit 2026-03-10T05:47:34.727265+0000 mon.a (mon.0) 663 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.hvmsxl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:47:35.304 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:35 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:35.304 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:47:35 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:35.305 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:47:35 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:35.305 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:47:35 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:35.305 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:47:35 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:35.305 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:47:35 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:35.305 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:35 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:35.305 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:47:35 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:35.305 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:35 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:35.373 INFO:teuthology.orchestra.run.vm02.stdout:Scheduled iscsi.foo update...
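
"Scheduled iscsi.foo update..." only means the spec was accepted; the daemon appears once the mgr reconciles it, which is why the job follows with a sleep. A sketch of checking progress by hand from the same cephadm shell (flag spellings as used by the orchestrator CLI in this Quincy-era image):

    sudo ceph orch ls iscsi                          # spec plus running/expected counts
    sudo ceph orch ps --daemon_type iscsi --refresh  # per-daemon state once deployed
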
2026-03-10T05:47:35.434 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'sleep 120'
2026-03-10T05:47:35.632 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:47:35 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:35.633 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:47:35 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:35.633 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:47:35 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:35.633 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:35 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:35.633 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:47:35 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:35.633 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:47:35 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:35.633 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:35 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:35.633 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:35 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:35.633 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:47:35 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:35 vm02 bash[17462]: cephadm 2026-03-10T05:47:34.094992+0000 mgr.y (mgr.14409) 58 : cephadm [INF] Saving service rgw.foo spec with placement count:2
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:35 vm02 bash[17462]: cephadm 2026-03-10T05:47:34.108894+0000 mgr.y (mgr.14409) 59 : cephadm [INF] Deploying daemon rgw.foo.vm02.pbogjd on vm02
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:35 vm02 bash[17462]: audit 2026-03-10T05:47:34.733845+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.hvmsxl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:35 vm02 bash[17462]: audit 2026-03-10T05:47:34.768230+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:35 vm02 bash[17462]: audit 2026-03-10T05:47:34.769085+0000 mon.c (mon.1) 73 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:35 vm02 bash[17462]: cephadm 2026-03-10T05:47:34.769756+0000 mgr.y (mgr.14409) 60 : cephadm [INF] Deploying daemon rgw.foo.vm05.hvmsxl on vm05
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:35 vm02 bash[17462]: audit 2026-03-10T05:47:35.371877+0000 mon.a (mon.0) 666 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:35 vm02 bash[17462]: audit 2026-03-10T05:47:35.568663+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:35 vm02 bash[17462]: audit 2026-03-10T05:47:35.585340+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:35 vm02 bash[17462]: audit 2026-03-10T05:47:35.586403+0000 mon.c (mon.1) 74 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm02.pglcfm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:35 vm02 bash[17462]: audit 2026-03-10T05:47:35.588101+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm02.pglcfm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:35 vm02 bash[17462]: audit 2026-03-10T05:47:35.597721+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm02.pglcfm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:35 vm02 bash[17462]: audit 2026-03-10T05:47:35.607575+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:35 vm02 bash[17462]: audit 2026-03-10T05:47:35.608520+0000 mon.c (mon.1) 75 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:35 vm02 bash[22526]: cephadm 2026-03-10T05:47:34.094992+0000 mgr.y (mgr.14409) 58 : cephadm [INF] Saving service rgw.foo spec with placement count:2
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:35 vm02 bash[22526]: cephadm 2026-03-10T05:47:34.108894+0000 mgr.y (mgr.14409) 59 : cephadm [INF] Deploying daemon rgw.foo.vm02.pbogjd on vm02
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:35 vm02 bash[22526]: audit 2026-03-10T05:47:34.733845+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.hvmsxl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:35 vm02 bash[22526]: audit 2026-03-10T05:47:34.768230+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:35 vm02 bash[22526]: audit 2026-03-10T05:47:34.769085+0000 mon.c (mon.1) 73 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:35 vm02 bash[22526]: cephadm 2026-03-10T05:47:34.769756+0000 mgr.y (mgr.14409) 60 : cephadm [INF] Deploying daemon rgw.foo.vm05.hvmsxl on vm05
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:35 vm02 bash[22526]: audit 2026-03-10T05:47:35.371877+0000 mon.a (mon.0) 666 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:35 vm02 bash[22526]: audit 2026-03-10T05:47:35.568663+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:35 vm02 bash[22526]: audit 2026-03-10T05:47:35.585340+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:35 vm02 bash[22526]: audit 2026-03-10T05:47:35.586403+0000 mon.c (mon.1) 74 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm02.pglcfm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:35 vm02 bash[22526]: audit 2026-03-10T05:47:35.588101+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm02.pglcfm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:35 vm02 bash[22526]: audit 2026-03-10T05:47:35.597721+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm02.pglcfm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:35 vm02 bash[22526]: audit 2026-03-10T05:47:35.607575+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:35.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:35 vm02 bash[22526]: audit 2026-03-10T05:47:35.608520+0000 mon.c (mon.1) 75 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:47:35.912 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:35 vm05 bash[17864]: cephadm 2026-03-10T05:47:34.094992+0000 mgr.y (mgr.14409) 58 : cephadm [INF] Saving service rgw.foo spec with placement count:2
2026-03-10T05:47:35.912 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:35 vm05 bash[17864]: cephadm 2026-03-10T05:47:34.108894+0000 mgr.y (mgr.14409) 59 : cephadm [INF] Deploying daemon rgw.foo.vm02.pbogjd on vm02
2026-03-10T05:47:35.912 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:35 vm05 bash[17864]: audit 2026-03-10T05:47:34.733845+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.hvmsxl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
2026-03-10T05:47:35.912 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:35 vm05 bash[17864]: audit 2026-03-10T05:47:34.768230+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:35.912 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:35 vm05 bash[17864]: audit 2026-03-10T05:47:34.769085+0000 mon.c (mon.1) 73 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:47:35.912 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:35 vm05 bash[17864]: cephadm 2026-03-10T05:47:34.769756+0000 mgr.y (mgr.14409) 60 : cephadm [INF] Deploying daemon rgw.foo.vm05.hvmsxl on vm05
2026-03-10T05:47:35.912 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:35 vm05 bash[17864]: audit 2026-03-10T05:47:35.371877+0000 mon.a (mon.0) 666 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:35.912 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:35 vm05 bash[17864]: audit 2026-03-10T05:47:35.568663+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:35.912 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:35 vm05 bash[17864]: audit 2026-03-10T05:47:35.585340+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:35.912 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:35 vm05 bash[17864]: audit 2026-03-10T05:47:35.586403+0000 mon.c (mon.1) 74 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm02.pglcfm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:47:35.912 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:35 vm05 bash[17864]: audit 2026-03-10T05:47:35.588101+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm02.pglcfm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:47:35.912 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:35 vm05 bash[17864]: audit 2026-03-10T05:47:35.597721+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm02.pglcfm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
2026-03-10T05:47:35.912 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:35 vm05 bash[17864]: audit 2026-03-10T05:47:35.607575+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:35.912 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:35 vm05 bash[17864]: audit 2026-03-10T05:47:35.608520+0000 mon.c (mon.1) 75 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:47:36.396 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:47:36 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:36.396 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:47:36 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:36.396 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:47:36 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:36.396 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:47:36 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:36.396 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:36 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:36.396 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:47:36 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:36.397 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:36 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:36.397 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:36 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:36.397 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:47:36 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:36.648 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:47:36 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:36.649 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:47:36 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:36.649 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:47:36 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:36.649 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:47:36 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:36.649 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:36 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:36.649 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:47:36 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:36.649 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:36 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:36.649 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:36 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:36.649 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:47:36 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
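
Every daemon start on both hosts logs the same KillMode=none deprecation warning because the systemd unit template cephadm generated for this v17.2.0 image still sets it; for this run it is harmless noise. On a long-lived cluster a drop-in override is the usual local fix, though cephadm owns the unit file and a redeploy can regenerate it (a sketch, using the fsid from this log):

    # Override the deprecated KillMode for all daemons of this cluster.
    d=/etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service.d
    sudo mkdir -p "$d"
    printf '[Service]\nKillMode=mixed\n' | sudo tee "$d/killmode.conf"
    sudo systemctl daemon-reload
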
2026-03-10T05:47:36.830 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:36 vm05 bash[17864]: audit 2026-03-10T05:47:35.365402+0000 mgr.y (mgr.14409) 61 : audit [DBG] from='client.24491 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "foo", "api_user": "u", "api_password": "p", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:47:36.830 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:36 vm05 bash[17864]: cephadm 2026-03-10T05:47:35.366120+0000 mgr.y (mgr.14409) 62 : cephadm [INF] Saving service iscsi.foo spec with placement count:1 2026-03-10T05:47:36.830 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:36 vm05 bash[17864]: cephadm 2026-03-10T05:47:35.570948+0000 mgr.y (mgr.14409) 63 : cephadm [INF] Saving service rgw.smpl spec with placement count:2 2026-03-10T05:47:36.830 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:36 vm05 bash[17864]: cephadm 2026-03-10T05:47:35.609470+0000 mgr.y (mgr.14409) 64 : cephadm [INF] Deploying daemon rgw.smpl.vm02.pglcfm on vm02 2026-03-10T05:47:36.830 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:36 vm05 bash[17864]: cluster 2026-03-10T05:47:35.685623+0000 mgr.y (mgr.14409) 65 : cluster [DBG] pgmap v40: 161 pgs: 2 creating+peering, 27 creating+activating, 132 active+clean; 453 KiB data, 54 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-10T05:47:36.830 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:36 vm05 bash[17864]: audit 2026-03-10T05:47:36.523923+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:36.830 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:36 vm05 bash[17864]: audit 2026-03-10T05:47:36.526643+0000 mon.c (mon.1) 76 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm05.hqqmap", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T05:47:36.830 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:36 vm05 bash[17864]: audit 2026-03-10T05:47:36.526895+0000 mon.a (mon.0) 673 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm05.hqqmap", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T05:47:36.830 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:36 vm05 bash[17864]: audit 2026-03-10T05:47:36.531883+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm05.hqqmap", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T05:47:36.830 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:36 vm05 bash[17864]: audit 2026-03-10T05:47:36.540568+0000 mon.a (mon.0) 675 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:36.830 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:36 vm05 bash[17864]: audit 2026-03-10T05:47:36.547686+0000 mon.c (mon.1) 77 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:47:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:36 vm02 bash[17462]: audit 2026-03-10T05:47:35.365402+0000 mgr.y (mgr.14409) 61 : audit [DBG] from='client.24491 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "foo", "api_user": "u", "api_password": "p", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:47:37.084 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:36 vm02 bash[17462]: cephadm 2026-03-10T05:47:35.366120+0000 mgr.y (mgr.14409) 62 : cephadm [INF] Saving service iscsi.foo spec with placement count:1 2026-03-10T05:47:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:36 vm02 bash[17462]: cephadm 2026-03-10T05:47:35.570948+0000 mgr.y (mgr.14409) 63 : cephadm [INF] Saving service rgw.smpl spec with placement count:2 2026-03-10T05:47:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:36 vm02 bash[17462]: cephadm 2026-03-10T05:47:35.609470+0000 mgr.y (mgr.14409) 64 : cephadm [INF] Deploying daemon rgw.smpl.vm02.pglcfm on vm02 2026-03-10T05:47:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:36 vm02 bash[17462]: cluster 2026-03-10T05:47:35.685623+0000 mgr.y (mgr.14409) 65 : cluster [DBG] pgmap v40: 161 pgs: 2 creating+peering, 27 creating+activating, 132 active+clean; 453 KiB data, 54 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-10T05:47:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:36 vm02 bash[17462]: audit 2026-03-10T05:47:36.523923+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:36 vm02 bash[17462]: audit 2026-03-10T05:47:36.526643+0000 mon.c (mon.1) 76 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm05.hqqmap", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T05:47:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:36 vm02 bash[17462]: audit 2026-03-10T05:47:36.526895+0000 mon.a (mon.0) 673 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm05.hqqmap", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T05:47:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:36 vm02 bash[17462]: audit 2026-03-10T05:47:36.531883+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm05.hqqmap", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T05:47:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:36 vm02 bash[17462]: audit 2026-03-10T05:47:36.540568+0000 mon.a (mon.0) 675 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:36 vm02 bash[17462]: audit 2026-03-10T05:47:36.547686+0000 mon.c (mon.1) 77 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:47:37.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:36 vm02 bash[22526]: audit 2026-03-10T05:47:35.365402+0000 mgr.y (mgr.14409) 61 : audit [DBG] from='client.24491 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "foo", "api_user": "u", "api_password": "p", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:47:37.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:36 vm02 bash[22526]: cephadm 2026-03-10T05:47:35.366120+0000 mgr.y (mgr.14409) 62 : cephadm [INF] Saving service iscsi.foo spec with placement count:1 2026-03-10T05:47:37.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:36 vm02 bash[22526]: cephadm 2026-03-10T05:47:35.570948+0000 mgr.y (mgr.14409) 63 : cephadm 
[INF] Saving service rgw.smpl spec with placement count:2 2026-03-10T05:47:37.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:36 vm02 bash[22526]: cephadm 2026-03-10T05:47:35.609470+0000 mgr.y (mgr.14409) 64 : cephadm [INF] Deploying daemon rgw.smpl.vm02.pglcfm on vm02 2026-03-10T05:47:37.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:36 vm02 bash[22526]: cluster 2026-03-10T05:47:35.685623+0000 mgr.y (mgr.14409) 65 : cluster [DBG] pgmap v40: 161 pgs: 2 creating+peering, 27 creating+activating, 132 active+clean; 453 KiB data, 54 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-10T05:47:37.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:36 vm02 bash[22526]: audit 2026-03-10T05:47:36.523923+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:37.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:36 vm02 bash[22526]: audit 2026-03-10T05:47:36.526643+0000 mon.c (mon.1) 76 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm05.hqqmap", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T05:47:37.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:36 vm02 bash[22526]: audit 2026-03-10T05:47:36.526895+0000 mon.a (mon.0) 673 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm05.hqqmap", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T05:47:37.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:36 vm02 bash[22526]: audit 2026-03-10T05:47:36.531883+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm05.hqqmap", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T05:47:37.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:36 vm02 bash[22526]: audit 2026-03-10T05:47:36.540568+0000 mon.a (mon.0) 675 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:37.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:36 vm02 bash[22526]: audit 2026-03-10T05:47:36.547686+0000 mon.c (mon.1) 77 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:47:37.171 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:47:37 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:47:37.172 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:47:37 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T05:47:37.172 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:47:37 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:37.172 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:37 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:37.172 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:47:37 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:37.172 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:47:37 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:37.172 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:47:37 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:37.172 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:47:37 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:37.172 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:47:37 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
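The KillMode=none warnings above are cosmetic in this run; systemd is pointing at line 24 of the cephadm-generated ceph-<fsid>@.service template, and each daemon stream repeats the same message. If one wanted to adopt systemd's recommendation, a drop-in would be the non-destructive route (a sketch; cephadm owns these unit files and may regenerate them, so treat this as illustrative only):

  sudo mkdir -p /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service.d
  printf '[Service]\nKillMode=mixed\n' | sudo tee \
      /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service.d/10-killmode.conf
  sudo systemctl daemon-reload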
2026-03-10T05:47:37.479 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:47:37 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:47:37] "GET /metrics HTTP/1.1" 200 205980 "" "Prometheus/2.33.4"
2026-03-10T05:47:38.037 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:37 vm05 bash[17864]: cephadm 2026-03-10T05:47:36.549938+0000 mgr.y (mgr.14409) 66 : cephadm [INF] Deploying daemon rgw.smpl.vm05.hqqmap on vm05
2026-03-10T05:47:38.037 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:37 vm05 bash[17864]: audit 2026-03-10T05:47:36.773911+0000 mon.a (mon.0) 676 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:38.037 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:37 vm05 bash[17864]: audit 2026-03-10T05:47:37.459483+0000 mon.a (mon.0) 677 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:38.037 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:37 vm05 bash[17864]: audit 2026-03-10T05:47:37.467788+0000 mon.c (mon.1) 78 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:47:38.037 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:37 vm05 bash[17864]: audit 2026-03-10T05:47:37.468874+0000 mon.c (mon.1) 79 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:47:39.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:38 vm02 bash[17462]: cluster 2026-03-10T05:47:37.685946+0000 mgr.y (mgr.14409) 67 : cluster [DBG] pgmap v41: 161 pgs: 2 creating+peering, 27 creating+activating, 132 active+clean; 453 KiB data, 54 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s
2026-03-10T05:47:40.084 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:39 vm02 bash[43400]: level=info ts=2026-03-10T05:47:39.805Z caller=cluster.go:688 component=cluster msg="gossip settled; proceeding" elapsed=10.002848631s
2026-03-10T05:47:40.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:40 vm02 bash[17462]: cluster 2026-03-10T05:47:39.686625+0000 mgr.y (mgr.14409) 68 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 456 KiB data, 63 MiB used, 160 GiB / 160 GiB avail; 242 KiB/s rd, 6.3 KiB/s wr, 433 op/s
2026-03-10T05:47:40.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:40 vm02 bash[17462]: audit 2026-03-10T05:47:40.565220+0000 mon.a (mon.0) 678 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:40.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:40 vm02 bash[17462]: audit 2026-03-10T05:47:40.688668+0000 mon.a (mon.0) 679 : audit [INF] from='mgr.14409 ' entity='mgr.y'
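The pgmap entries show the new pools' PGs draining out of creating+peering/creating+activating into all active+clean between v40 and v42. The same summary can be pulled on demand from any client node (standard ceph CLI commands, shown here for orientation):

  ceph pg stat             # one-line pgmap summary, matching the [DBG] pgmap entries
  ceph -s                  # full cluster status, including the mgrmap/osdmap epochs logged below
  ceph pg dump pgs_brief   # per-PG state listing, if a stuck PG needs chasing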
2026-03-10T05:47:41.789 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:47:41 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:41.790 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:47:41 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:41.790 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:47:41 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:41.790 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:41 vm02 bash[17462]: cephadm 2026-03-10T05:47:40.691271+0000 mgr.y (mgr.14409) 69 : cephadm [INF] Checking dashboard <-> RGW credentials
2026-03-10T05:47:41.790 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:41 vm02 bash[17462]: audit 2026-03-10T05:47:41.231642+0000 mon.a (mon.0) 680 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:41.790 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:41 vm02 bash[17462]: audit 2026-03-10T05:47:41.240142+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:41.790 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:41 vm02 bash[17462]: audit 2026-03-10T05:47:41.246991+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:41.790 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:41 vm02 bash[17462]: audit 2026-03-10T05:47:41.258533+0000 mon.c (mon.1) 80 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm02.mxbwmh", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T05:47:41.790 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:41 vm02 bash[17462]: audit 2026-03-10T05:47:41.258862+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm02.mxbwmh", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T05:47:41.790 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:41 vm02 bash[17462]: audit 2026-03-10T05:47:41.263870+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm02.mxbwmh", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished
2026-03-10T05:47:41.790 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:41 vm02 bash[17462]: audit 2026-03-10T05:47:41.267788+0000 mon.c (mon.1) 81 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:47:41.790 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:41 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:41.790 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:41 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:41.790 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:47:41 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:41.790 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:47:41 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:42.085 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:47:41 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:47:42.086 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:47:41 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
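As with the RGW daemon earlier, the iSCSI daemon's keyring request (audit records 80/683/684) corresponds to a CLI call like the following; a sketch, with the cephadm-generated vm02.mxbwmh suffix kept from the log:

  ceph auth get-or-create client.iscsi.foo.vm02.mxbwmh \
      mon 'profile rbd, allow command "osd blocklist", allow command "config-key get" with "key" prefix "iscsi/"' \
      mgr 'allow command "service status"' \
      osd 'allow rwx'

The mon caps are deliberately narrow: the rbd profile plus exactly the two extra commands the gateway needs, clearing its own blocklist entries and reading its iscsi/ config keys.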
2026-03-10T05:47:42.882 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:42 vm02 bash[17462]: cephadm 2026-03-10T05:47:41.249828+0000 mgr.y (mgr.14409) 70 : cephadm [INF] Checking pool "foo" exists for service iscsi.foo
2026-03-10T05:47:42.882 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:42 vm02 bash[17462]: cephadm 2026-03-10T05:47:41.268968+0000 mgr.y (mgr.14409) 71 : cephadm [INF] Deploying daemon iscsi.foo.vm02.mxbwmh on vm02
2026-03-10T05:47:42.882 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:42 vm02 bash[17462]: cluster 2026-03-10T05:47:41.687300+0000 mgr.y (mgr.14409) 72 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 456 KiB data, 63 MiB used, 160 GiB / 160 GiB avail; 211 KiB/s rd, 4.6 KiB/s wr, 376 op/s
2026-03-10T05:47:42.882 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:42 vm02 bash[17462]: audit 2026-03-10T05:47:41.783834+0000 mon.a (mon.0) 685 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:42.882 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:42 vm02 bash[17462]: audit 2026-03-10T05:47:42.125575+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:42.882 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:42 vm02 bash[17462]: audit 2026-03-10T05:47:42.129228+0000 mon.c (mon.1) 82 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:47:42.882 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:42 vm02 bash[17462]: audit 2026-03-10T05:47:42.130591+0000 mon.c (mon.1) 83 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:47:42.882 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:42 vm02 bash[17462]: cluster 2026-03-10T05:47:42.308052+0000 mon.a (mon.0) 687 : cluster [DBG] mgrmap e20: y(active, since 50s), standbys: x
2026-03-10T05:47:42.882 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:42 vm02 bash[17462]: cluster 2026-03-10T05:47:42.739009+0000 mon.a (mon.0) 688 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in
2026-03-10T05:47:43.258 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:47:42 vm05 bash[18520]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:47:42] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
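The GET /metrics lines are Prometheus (deployed on vm05) scraping the mgr prometheus module; the standby mgr.x appears to answer with an empty body (200 -) while the active mgr.y returns ~200 KB of metrics. The endpoint can be probed by hand; 9283 is the module's default port, an assumption here since the log does not show it:

  curl -s http://192.168.123.102:9283/metrics | head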
2026-03-10T05:47:44.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:43 vm02 bash[17462]: audit 2026-03-10T05:47:42.870336+0000 mon.a (mon.0) 689 : audit [DBG] from='client.? 192.168.123.102:0/3049506641' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-10T05:47:44.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:43 vm02 bash[17462]: audit 2026-03-10T05:47:43.060121+0000 mon.a (mon.0) 690 : audit [INF] from='client.? 192.168.123.102:0/4270184176' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1222859905"}]: dispatch
2026-03-10T05:47:45.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:44 vm02 bash[17462]: cluster 2026-03-10T05:47:43.687674+0000 mgr.y (mgr.14409) 73 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 94 MiB used, 160 GiB / 160 GiB avail; 394 KiB/s rd, 4.6 KiB/s wr, 653 op/s
2026-03-10T05:47:45.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:44 vm02 bash[17462]: audit 2026-03-10T05:47:43.809463+0000 mon.a (mon.0) 691 : audit [INF] from='client.? 192.168.123.102:0/4270184176' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1222859905"}]': finished
2026-03-10T05:47:45.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:44 vm02 bash[17462]: cluster 2026-03-10T05:47:43.809497+0000 mon.a (mon.0) 692 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in
2026-03-10T05:47:45.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:44 vm02 bash[17462]: audit 2026-03-10T05:47:44.023072+0000 mon.a (mon.0) 693 : audit [INF] from='client.? 192.168.123.102:0/722304146' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/437369469"}]: dispatch
2026-03-10T05:47:45.821 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:45 vm02 bash[17462]: audit 2026-03-10T05:47:44.821621+0000 mon.a (mon.0) 694 : audit [INF] from='client.? 192.168.123.102:0/722304146' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/437369469"}]': finished
2026-03-10T05:47:46.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:45 vm02 bash[17462]: cluster 2026-03-10T05:47:44.821777+0000 mon.a (mon.0) 695 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in
2026-03-10T05:47:46.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:45 vm02 bash[17462]: audit 2026-03-10T05:47:45.071203+0000 mon.a (mon.0) 696 : audit [INF] from='client.? 192.168.123.102:0/1799336068' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3587596038"}]: dispatch
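The blocklist churn above is the freshly deployed iSCSI daemon (entity client.iscsi.foo.vm02.mxbwmh) clearing stale client blocklist entries one address at a time at startup; note that each completed rm bumps the osdmap epoch (e66 through e69 over this stretch). Done manually it would look like:

  ceph osd blocklist ls
  ceph osd blocklist rm 192.168.123.102:0/1222859905   # address taken from audit record 690 above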
2026-03-10T05:47:46.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:45 vm02 bash[17462]: audit 2026-03-10T05:47:45.323196+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:46.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:45 vm02 bash[17462]: audit 2026-03-10T05:47:45.445176+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:46.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:45 vm02 bash[17462]: audit 2026-03-10T05:47:45.454753+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:46.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:45 vm02 bash[17462]: audit 2026-03-10T05:47:45.461494+0000 mon.c (mon.1) 84 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-10T05:47:46.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:45 vm02 bash[17462]: audit 2026-03-10T05:47:45.463038+0000 mon.c (mon.1) 85 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-10T05:47:46.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:45 vm02 bash[17462]: audit 2026-03-10T05:47:45.468362+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:46.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:45 vm02 bash[17462]: audit 2026-03-10T05:47:45.476814+0000 mon.c (mon.1) 86 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch
2026-03-10T05:47:46.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:45 vm02 bash[17462]: audit 2026-03-10T05:47:45.481530+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:47:46.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:45 vm02 bash[17462]: audit 2026-03-10T05:47:45.486458+0000 mon.c (mon.1) 87 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:47:46.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:45 vm02 bash[17462]: audit 2026-03-10T05:47:45.487480+0000 mon.c (mon.1) 88 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:47:47.149 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:46 vm02 bash[17462]: audit 2026-03-10T05:47:45.462058+0000 mgr.y (mgr.14409) 74 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-10T05:47:47.149 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:46 vm02 bash[17462]: cephadm 2026-03-10T05:47:45.462841+0000 mgr.y (mgr.14409) 75 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.102:5000 to Dashboard
2026-03-10T05:47:47.149 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:46 vm02 bash[17462]: audit 2026-03-10T05:47:45.463260+0000 mgr.y (mgr.14409) 76 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T05:47:47.149 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:46 vm02 bash[17462]: audit 2026-03-10T05:47:45.477366+0000 mgr.y (mgr.14409) 77 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch 2026-03-10T05:47:47.149 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:46 vm02 bash[17462]: cluster 2026-03-10T05:47:45.688058+0000 mgr.y (mgr.14409) 78 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 457 KiB data, 94 MiB used, 160 GiB / 160 GiB avail; 308 KiB/s rd, 682 B/s wr, 473 op/s 2026-03-10T05:47:47.149 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:46 vm02 bash[17462]: audit 2026-03-10T05:47:45.844056+0000 mon.a (mon.0) 702 : audit [INF] from='client.? 192.168.123.102:0/1799336068' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3587596038"}]': finished 2026-03-10T05:47:47.149 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:46 vm02 bash[17462]: cluster 2026-03-10T05:47:45.844248+0000 mon.a (mon.0) 703 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-10T05:47:47.149 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:46 vm02 bash[17462]: audit 2026-03-10T05:47:46.042638+0000 mon.a (mon.0) 704 : audit [INF] from='client.? 192.168.123.102:0/2041586259' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1174218704"}]: dispatch 2026-03-10T05:47:47.149 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:46 vm02 bash[17462]: audit 2026-03-10T05:47:46.793714+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:47.149 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:46 vm02 bash[22526]: audit 2026-03-10T05:47:45.462058+0000 mgr.y (mgr.14409) 74 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T05:47:47.149 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:46 vm02 bash[22526]: cephadm 2026-03-10T05:47:45.462841+0000 mgr.y (mgr.14409) 75 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.102:5000 to Dashboard 2026-03-10T05:47:47.149 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:46 vm02 bash[22526]: audit 2026-03-10T05:47:45.463260+0000 mgr.y (mgr.14409) 76 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T05:47:47.149 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:46 vm02 bash[22526]: audit 2026-03-10T05:47:45.477366+0000 mgr.y (mgr.14409) 77 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch 2026-03-10T05:47:47.149 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:46 vm02 bash[22526]: cluster 2026-03-10T05:47:45.688058+0000 mgr.y (mgr.14409) 78 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 457 KiB data, 94 MiB used, 160 GiB / 160 GiB avail; 308 KiB/s rd, 682 B/s wr, 473 op/s 2026-03-10T05:47:47.149 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:46 vm02 bash[22526]: audit 2026-03-10T05:47:45.844056+0000 mon.a (mon.0) 702 : audit [INF] from='client.? 
192.168.123.102:0/1799336068' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3587596038"}]': finished 2026-03-10T05:47:47.149 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:46 vm02 bash[22526]: cluster 2026-03-10T05:47:45.844248+0000 mon.a (mon.0) 703 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-10T05:47:47.149 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:46 vm02 bash[22526]: audit 2026-03-10T05:47:46.042638+0000 mon.a (mon.0) 704 : audit [INF] from='client.? 192.168.123.102:0/2041586259' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1174218704"}]: dispatch 2026-03-10T05:47:47.150 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:46 vm02 bash[22526]: audit 2026-03-10T05:47:46.793714+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:47.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:46 vm05 bash[17864]: audit 2026-03-10T05:47:45.462058+0000 mgr.y (mgr.14409) 74 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T05:47:47.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:46 vm05 bash[17864]: cephadm 2026-03-10T05:47:45.462841+0000 mgr.y (mgr.14409) 75 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.102:5000 to Dashboard 2026-03-10T05:47:47.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:46 vm05 bash[17864]: audit 2026-03-10T05:47:45.463260+0000 mgr.y (mgr.14409) 76 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T05:47:47.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:46 vm05 bash[17864]: audit 2026-03-10T05:47:45.477366+0000 mgr.y (mgr.14409) 77 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch 2026-03-10T05:47:47.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:46 vm05 bash[17864]: cluster 2026-03-10T05:47:45.688058+0000 mgr.y (mgr.14409) 78 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 457 KiB data, 94 MiB used, 160 GiB / 160 GiB avail; 308 KiB/s rd, 682 B/s wr, 473 op/s 2026-03-10T05:47:47.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:46 vm05 bash[17864]: audit 2026-03-10T05:47:45.844056+0000 mon.a (mon.0) 702 : audit [INF] from='client.? 192.168.123.102:0/1799336068' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3587596038"}]': finished 2026-03-10T05:47:47.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:46 vm05 bash[17864]: cluster 2026-03-10T05:47:45.844248+0000 mon.a (mon.0) 703 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-10T05:47:47.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:46 vm05 bash[17864]: audit 2026-03-10T05:47:46.042638+0000 mon.a (mon.0) 704 : audit [INF] from='client.? 
192.168.123.102:0/2041586259' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1174218704"}]: dispatch 2026-03-10T05:47:47.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:46 vm05 bash[17864]: audit 2026-03-10T05:47:46.793714+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:47.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:47:47 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:47:47] "GET /metrics HTTP/1.1" 200 205980 "" "Prometheus/2.33.4" 2026-03-10T05:47:48.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:47 vm05 bash[17864]: audit 2026-03-10T05:47:46.853691+0000 mon.a (mon.0) 706 : audit [INF] from='client.? 192.168.123.102:0/2041586259' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1174218704"}]': finished 2026-03-10T05:47:48.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:47 vm05 bash[17864]: cluster 2026-03-10T05:47:46.853912+0000 mon.a (mon.0) 707 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-10T05:47:48.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:47 vm05 bash[17864]: audit 2026-03-10T05:47:47.051046+0000 mon.a (mon.0) 708 : audit [INF] from='client.? 192.168.123.102:0/2669408545' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3587596038"}]: dispatch 2026-03-10T05:47:48.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:47 vm02 bash[17462]: audit 2026-03-10T05:47:46.853691+0000 mon.a (mon.0) 706 : audit [INF] from='client.? 192.168.123.102:0/2041586259' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1174218704"}]': finished 2026-03-10T05:47:48.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:47 vm02 bash[17462]: cluster 2026-03-10T05:47:46.853912+0000 mon.a (mon.0) 707 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-10T05:47:48.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:47 vm02 bash[17462]: audit 2026-03-10T05:47:47.051046+0000 mon.a (mon.0) 708 : audit [INF] from='client.? 192.168.123.102:0/2669408545' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3587596038"}]: dispatch 2026-03-10T05:47:48.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:47 vm02 bash[22526]: audit 2026-03-10T05:47:46.853691+0000 mon.a (mon.0) 706 : audit [INF] from='client.? 192.168.123.102:0/2041586259' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1174218704"}]': finished 2026-03-10T05:47:48.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:47 vm02 bash[22526]: cluster 2026-03-10T05:47:46.853912+0000 mon.a (mon.0) 707 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-10T05:47:48.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:47 vm02 bash[22526]: audit 2026-03-10T05:47:47.051046+0000 mon.a (mon.0) 708 : audit [INF] from='client.? 
192.168.123.102:0/2669408545' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3587596038"}]: dispatch 2026-03-10T05:47:49.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:48 vm05 bash[17864]: cluster 2026-03-10T05:47:47.688369+0000 mgr.y (mgr.14409) 79 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 94 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:49.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:48 vm05 bash[17864]: audit 2026-03-10T05:47:47.863497+0000 mon.a (mon.0) 709 : audit [INF] from='client.? 192.168.123.102:0/2669408545' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3587596038"}]': finished 2026-03-10T05:47:49.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:48 vm05 bash[17864]: cluster 2026-03-10T05:47:47.863628+0000 mon.a (mon.0) 710 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-10T05:47:49.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:48 vm05 bash[17864]: audit 2026-03-10T05:47:48.065883+0000 mon.c (mon.1) 89 : audit [INF] from='client.? 192.168.123.102:0/893385163' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/180339681"}]: dispatch 2026-03-10T05:47:49.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:48 vm05 bash[17864]: audit 2026-03-10T05:47:48.066519+0000 mon.a (mon.0) 711 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/180339681"}]: dispatch 2026-03-10T05:47:49.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:48 vm05 bash[17864]: audit 2026-03-10T05:47:48.657168+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:49.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:48 vm05 bash[17864]: audit 2026-03-10T05:47:48.695756+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:49.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:48 vm02 bash[17462]: cluster 2026-03-10T05:47:47.688369+0000 mgr.y (mgr.14409) 79 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 94 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:49.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:48 vm02 bash[17462]: audit 2026-03-10T05:47:47.863497+0000 mon.a (mon.0) 709 : audit [INF] from='client.? 192.168.123.102:0/2669408545' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3587596038"}]': finished 2026-03-10T05:47:49.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:48 vm02 bash[17462]: cluster 2026-03-10T05:47:47.863628+0000 mon.a (mon.0) 710 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-10T05:47:49.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:48 vm02 bash[17462]: audit 2026-03-10T05:47:48.065883+0000 mon.c (mon.1) 89 : audit [INF] from='client.? 192.168.123.102:0/893385163' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/180339681"}]: dispatch 2026-03-10T05:47:49.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:48 vm02 bash[17462]: audit 2026-03-10T05:47:48.066519+0000 mon.a (mon.0) 711 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/180339681"}]: dispatch 2026-03-10T05:47:49.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:48 vm02 bash[17462]: audit 2026-03-10T05:47:48.657168+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:49.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:48 vm02 bash[17462]: audit 2026-03-10T05:47:48.695756+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:49.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:48 vm02 bash[22526]: cluster 2026-03-10T05:47:47.688369+0000 mgr.y (mgr.14409) 79 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 94 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:47:49.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:48 vm02 bash[22526]: audit 2026-03-10T05:47:47.863497+0000 mon.a (mon.0) 709 : audit [INF] from='client.? 192.168.123.102:0/2669408545' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3587596038"}]': finished 2026-03-10T05:47:49.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:48 vm02 bash[22526]: cluster 2026-03-10T05:47:47.863628+0000 mon.a (mon.0) 710 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-10T05:47:49.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:48 vm02 bash[22526]: audit 2026-03-10T05:47:48.065883+0000 mon.c (mon.1) 89 : audit [INF] from='client.? 192.168.123.102:0/893385163' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/180339681"}]: dispatch 2026-03-10T05:47:49.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:48 vm02 bash[22526]: audit 2026-03-10T05:47:48.066519+0000 mon.a (mon.0) 711 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/180339681"}]: dispatch 2026-03-10T05:47:49.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:48 vm02 bash[22526]: audit 2026-03-10T05:47:48.657168+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:49.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:48 vm02 bash[22526]: audit 2026-03-10T05:47:48.695756+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:50.130 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:49 vm02 bash[17462]: cephadm 2026-03-10T05:47:48.699386+0000 mgr.y (mgr.14409) 80 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-10T05:47:50.131 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:49 vm02 bash[17462]: audit 2026-03-10T05:47:48.868921+0000 mon.a (mon.0) 714 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/180339681"}]': finished 2026-03-10T05:47:50.131 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:49 vm02 bash[17462]: cluster 2026-03-10T05:47:48.869081+0000 mon.a (mon.0) 715 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-10T05:47:50.131 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:49 vm02 bash[17462]: audit 2026-03-10T05:47:49.102751+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:50.131 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:49 vm02 bash[17462]: audit 2026-03-10T05:47:49.126780+0000 mon.c (mon.1) 90 : audit [INF] from='client.? 192.168.123.102:0/2886898084' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3558265816"}]: dispatch 2026-03-10T05:47:50.131 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:49 vm02 bash[17462]: audit 2026-03-10T05:47:49.127117+0000 mon.a (mon.0) 717 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3558265816"}]: dispatch 2026-03-10T05:47:50.131 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:49 vm02 bash[22526]: cephadm 2026-03-10T05:47:48.699386+0000 mgr.y (mgr.14409) 80 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-10T05:47:50.131 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:49 vm02 bash[22526]: audit 2026-03-10T05:47:48.868921+0000 mon.a (mon.0) 714 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/180339681"}]': finished 2026-03-10T05:47:50.131 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:49 vm02 bash[22526]: cluster 2026-03-10T05:47:48.869081+0000 mon.a (mon.0) 715 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-10T05:47:50.131 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:49 vm02 bash[22526]: audit 2026-03-10T05:47:49.102751+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:50.131 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:49 vm02 bash[22526]: audit 2026-03-10T05:47:49.126780+0000 mon.c (mon.1) 90 : audit [INF] from='client.? 192.168.123.102:0/2886898084' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3558265816"}]: dispatch 2026-03-10T05:47:50.131 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:49 vm02 bash[22526]: audit 2026-03-10T05:47:49.127117+0000 mon.a (mon.0) 717 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3558265816"}]: dispatch 2026-03-10T05:47:50.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:49 vm05 bash[17864]: cephadm 2026-03-10T05:47:48.699386+0000 mgr.y (mgr.14409) 80 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-10T05:47:50.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:49 vm05 bash[17864]: audit 2026-03-10T05:47:48.868921+0000 mon.a (mon.0) 714 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/180339681"}]': finished 2026-03-10T05:47:50.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:49 vm05 bash[17864]: cluster 2026-03-10T05:47:48.869081+0000 mon.a (mon.0) 715 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-10T05:47:50.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:49 vm05 bash[17864]: audit 2026-03-10T05:47:49.102751+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:47:50.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:49 vm05 bash[17864]: audit 2026-03-10T05:47:49.126780+0000 mon.c (mon.1) 90 : audit [INF] from='client.? 192.168.123.102:0/2886898084' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3558265816"}]: dispatch 2026-03-10T05:47:50.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:49 vm05 bash[17864]: audit 2026-03-10T05:47:49.127117+0000 mon.a (mon.0) 717 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3558265816"}]: dispatch 2026-03-10T05:47:51.133 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:50 vm02 bash[17462]: cluster 2026-03-10T05:47:49.688785+0000 mgr.y (mgr.14409) 81 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 70 KiB/s rd, 0 B/s wr, 104 op/s 2026-03-10T05:47:51.133 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:50 vm02 bash[17462]: audit 2026-03-10T05:47:50.117839+0000 mon.a (mon.0) 718 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3558265816"}]': finished 2026-03-10T05:47:51.133 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:50 vm02 bash[17462]: cluster 2026-03-10T05:47:50.118111+0000 mon.a (mon.0) 719 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-10T05:47:51.133 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:50 vm02 bash[17462]: audit 2026-03-10T05:47:50.301112+0000 mon.b (mon.2) 29 : audit [INF] from='client.? 192.168.123.102:0/3458093400' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1876503597"}]: dispatch 2026-03-10T05:47:51.133 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:50 vm02 bash[17462]: audit 2026-03-10T05:47:50.306821+0000 mon.a (mon.0) 720 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1876503597"}]: dispatch 2026-03-10T05:47:51.133 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:50 vm02 bash[22526]: cluster 2026-03-10T05:47:49.688785+0000 mgr.y (mgr.14409) 81 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 70 KiB/s rd, 0 B/s wr, 104 op/s 2026-03-10T05:47:51.133 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:50 vm02 bash[22526]: audit 2026-03-10T05:47:50.117839+0000 mon.a (mon.0) 718 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3558265816"}]': finished 2026-03-10T05:47:51.133 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:50 vm02 bash[22526]: cluster 2026-03-10T05:47:50.118111+0000 mon.a (mon.0) 719 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-10T05:47:51.133 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:50 vm02 bash[22526]: audit 2026-03-10T05:47:50.301112+0000 mon.b (mon.2) 29 : audit [INF] from='client.? 192.168.123.102:0/3458093400' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1876503597"}]: dispatch 2026-03-10T05:47:51.133 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:50 vm02 bash[22526]: audit 2026-03-10T05:47:50.306821+0000 mon.a (mon.0) 720 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1876503597"}]: dispatch 2026-03-10T05:47:51.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:50 vm05 bash[17864]: cluster 2026-03-10T05:47:49.688785+0000 mgr.y (mgr.14409) 81 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 70 KiB/s rd, 0 B/s wr, 104 op/s 2026-03-10T05:47:51.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:50 vm05 bash[17864]: audit 2026-03-10T05:47:50.117839+0000 mon.a (mon.0) 718 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3558265816"}]': finished 2026-03-10T05:47:51.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:50 vm05 bash[17864]: cluster 2026-03-10T05:47:50.118111+0000 mon.a (mon.0) 719 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-10T05:47:51.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:50 vm05 bash[17864]: audit 2026-03-10T05:47:50.301112+0000 mon.b (mon.2) 29 : audit [INF] from='client.? 192.168.123.102:0/3458093400' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1876503597"}]: dispatch 2026-03-10T05:47:51.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:50 vm05 bash[17864]: audit 2026-03-10T05:47:50.306821+0000 mon.a (mon.0) 720 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1876503597"}]: dispatch 2026-03-10T05:47:52.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:52 vm05 bash[17864]: audit 2026-03-10T05:47:51.118760+0000 mon.a (mon.0) 721 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1876503597"}]': finished 2026-03-10T05:47:52.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:52 vm05 bash[17864]: cluster 2026-03-10T05:47:51.118830+0000 mon.a (mon.0) 722 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-10T05:47:52.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:52 vm05 bash[17864]: audit 2026-03-10T05:47:51.306460+0000 mon.a (mon.0) 723 : audit [INF] from='client.? 
192.168.123.102:0/4800252' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3932825893"}]: dispatch 2026-03-10T05:47:52.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:52 vm05 bash[17864]: cluster 2026-03-10T05:47:51.689183+0000 mgr.y (mgr.14409) 82 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 70 KiB/s rd, 0 B/s wr, 104 op/s 2026-03-10T05:47:52.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:52 vm05 bash[17864]: audit 2026-03-10T05:47:51.713581+0000 mon.c (mon.1) 91 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1b", "id": [7, 2]}]: dispatch 2026-03-10T05:47:52.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:52 vm05 bash[17864]: audit 2026-03-10T05:47:51.713725+0000 mon.c (mon.1) 92 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.7", "id": [1, 2]}]: dispatch 2026-03-10T05:47:52.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:52 vm05 bash[17864]: audit 2026-03-10T05:47:51.713804+0000 mon.c (mon.1) 93 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.9", "id": [1, 2]}]: dispatch 2026-03-10T05:47:52.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:52 vm05 bash[17864]: audit 2026-03-10T05:47:51.713883+0000 mon.c (mon.1) 94 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.d", "id": [1, 5]}]: dispatch 2026-03-10T05:47:52.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:52 vm05 bash[17864]: audit 2026-03-10T05:47:51.714010+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1b", "id": [7, 2]}]: dispatch 2026-03-10T05:47:52.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:52 vm05 bash[17864]: audit 2026-03-10T05:47:51.714412+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.7", "id": [1, 2]}]: dispatch 2026-03-10T05:47:52.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:52 vm05 bash[17864]: audit 2026-03-10T05:47:51.714497+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.9", "id": [1, 2]}]: dispatch 2026-03-10T05:47:52.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:52 vm05 bash[17864]: audit 2026-03-10T05:47:51.714586+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.d", "id": [1, 5]}]: dispatch 2026-03-10T05:47:52.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:52 vm05 bash[17864]: audit 2026-03-10T05:47:51.758606+0000 mon.c (mon.1) 95 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:47:52.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:52 vm05 bash[17864]: audit 2026-03-10T05:47:51.758949+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:47:52.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:52 vm05 bash[17864]: audit 2026-03-10T05:47:51.765950+0000 mon.c (mon.1) 96 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:47:52.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:52 vm05 bash[17864]: audit 2026-03-10T05:47:51.766342+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:47:52.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:52 vm02 bash[17462]: audit 2026-03-10T05:47:51.118760+0000 mon.a (mon.0) 721 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1876503597"}]': finished 2026-03-10T05:47:52.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:52 vm02 bash[17462]: cluster 2026-03-10T05:47:51.118830+0000 mon.a (mon.0) 722 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-10T05:47:52.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:52 vm02 bash[17462]: audit 2026-03-10T05:47:51.306460+0000 mon.a (mon.0) 723 : audit [INF] from='client.? 192.168.123.102:0/4800252' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3932825893"}]: dispatch 2026-03-10T05:47:52.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:52 vm02 bash[17462]: cluster 2026-03-10T05:47:51.689183+0000 mgr.y (mgr.14409) 82 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 70 KiB/s rd, 0 B/s wr, 104 op/s 2026-03-10T05:47:52.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:52 vm02 bash[17462]: audit 2026-03-10T05:47:51.713581+0000 mon.c (mon.1) 91 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1b", "id": [7, 2]}]: dispatch 2026-03-10T05:47:52.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:52 vm02 bash[17462]: audit 2026-03-10T05:47:51.713725+0000 mon.c (mon.1) 92 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.7", "id": [1, 2]}]: dispatch 2026-03-10T05:47:52.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:52 vm02 bash[17462]: audit 2026-03-10T05:47:51.713804+0000 mon.c (mon.1) 93 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.9", "id": [1, 2]}]: dispatch 2026-03-10T05:47:52.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:52 vm02 bash[17462]: audit 2026-03-10T05:47:51.713883+0000 mon.c (mon.1) 94 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.d", "id": [1, 5]}]: dispatch 2026-03-10T05:47:52.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:52 vm02 bash[17462]: audit 2026-03-10T05:47:51.714010+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1b", "id": [7, 2]}]: dispatch 2026-03-10T05:47:52.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 
05:47:52 vm02 bash[17462]: audit 2026-03-10T05:47:51.714412+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.7", "id": [1, 2]}]: dispatch 2026-03-10T05:47:52.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:52 vm02 bash[17462]: audit 2026-03-10T05:47:51.714497+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.9", "id": [1, 2]}]: dispatch 2026-03-10T05:47:52.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:52 vm02 bash[17462]: audit 2026-03-10T05:47:51.714586+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.d", "id": [1, 5]}]: dispatch 2026-03-10T05:47:52.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:52 vm02 bash[17462]: audit 2026-03-10T05:47:51.758606+0000 mon.c (mon.1) 95 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:47:52.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:52 vm02 bash[17462]: audit 2026-03-10T05:47:51.758949+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:47:52.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:52 vm02 bash[17462]: audit 2026-03-10T05:47:51.765950+0000 mon.c (mon.1) 96 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:47:52.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:52 vm02 bash[17462]: audit 2026-03-10T05:47:51.766342+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:47:52.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:52 vm02 bash[22526]: audit 2026-03-10T05:47:51.118760+0000 mon.a (mon.0) 721 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1876503597"}]': finished 2026-03-10T05:47:52.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:52 vm02 bash[22526]: cluster 2026-03-10T05:47:51.118830+0000 mon.a (mon.0) 722 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-10T05:47:52.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:52 vm02 bash[22526]: audit 2026-03-10T05:47:51.306460+0000 mon.a (mon.0) 723 : audit [INF] from='client.? 
192.168.123.102:0/4800252' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3932825893"}]: dispatch 2026-03-10T05:47:52.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:52 vm02 bash[22526]: cluster 2026-03-10T05:47:51.689183+0000 mgr.y (mgr.14409) 82 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 70 KiB/s rd, 0 B/s wr, 104 op/s 2026-03-10T05:47:52.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:52 vm02 bash[22526]: audit 2026-03-10T05:47:51.713581+0000 mon.c (mon.1) 91 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1b", "id": [7, 2]}]: dispatch 2026-03-10T05:47:52.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:52 vm02 bash[22526]: audit 2026-03-10T05:47:51.713725+0000 mon.c (mon.1) 92 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.7", "id": [1, 2]}]: dispatch 2026-03-10T05:47:52.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:52 vm02 bash[22526]: audit 2026-03-10T05:47:51.713804+0000 mon.c (mon.1) 93 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.9", "id": [1, 2]}]: dispatch 2026-03-10T05:47:52.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:52 vm02 bash[22526]: audit 2026-03-10T05:47:51.713883+0000 mon.c (mon.1) 94 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.d", "id": [1, 5]}]: dispatch 2026-03-10T05:47:52.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:52 vm02 bash[22526]: audit 2026-03-10T05:47:51.714010+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1b", "id": [7, 2]}]: dispatch 2026-03-10T05:47:52.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:52 vm02 bash[22526]: audit 2026-03-10T05:47:51.714412+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.7", "id": [1, 2]}]: dispatch 2026-03-10T05:47:52.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:52 vm02 bash[22526]: audit 2026-03-10T05:47:51.714497+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.9", "id": [1, 2]}]: dispatch 2026-03-10T05:47:52.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:52 vm02 bash[22526]: audit 2026-03-10T05:47:51.714586+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.d", "id": [1, 5]}]: dispatch 2026-03-10T05:47:52.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:52 vm02 bash[22526]: audit 2026-03-10T05:47:51.758606+0000 mon.c (mon.1) 95 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:47:52.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:52 vm02 bash[22526]: audit 2026-03-10T05:47:51.758949+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:47:52.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:52 vm02 bash[22526]: audit 2026-03-10T05:47:51.765950+0000 mon.c (mon.1) 96 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:47:52.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:52 vm02 bash[22526]: audit 2026-03-10T05:47:51.766342+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:47:53.156 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:47:52 vm05 bash[18520]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:47:52] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T05:47:53.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:53 vm05 bash[17864]: audit 2026-03-10T05:47:52.144294+0000 mon.a (mon.0) 730 : audit [INF] from='client.? 192.168.123.102:0/4800252' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3932825893"}]': finished 2026-03-10T05:47:53.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:53 vm05 bash[17864]: audit 2026-03-10T05:47:52.144342+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1b", "id": [7, 2]}]': finished 2026-03-10T05:47:53.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:53 vm05 bash[17864]: audit 2026-03-10T05:47:52.149034+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.7", "id": [1, 2]}]': finished 2026-03-10T05:47:53.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:53 vm05 bash[17864]: audit 2026-03-10T05:47:52.149084+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.9", "id": [1, 2]}]': finished 2026-03-10T05:47:53.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:53 vm05 bash[17864]: audit 2026-03-10T05:47:52.149105+0000 mon.a (mon.0) 734 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.d", "id": [1, 5]}]': finished 2026-03-10T05:47:53.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:53 vm05 bash[17864]: cluster 2026-03-10T05:47:52.149125+0000 mon.a (mon.0) 735 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-10T05:47:53.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:53 vm05 bash[17864]: audit 2026-03-10T05:47:52.329296+0000 mon.a (mon.0) 736 : audit [INF] from='client.? 192.168.123.102:0/487299864' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3932825893"}]: dispatch 2026-03-10T05:47:53.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:53 vm05 bash[17864]: audit 2026-03-10T05:47:52.636409+0000 mgr.y (mgr.14409) 83 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:47:53.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:53 vm02 bash[17462]: audit 2026-03-10T05:47:52.144294+0000 mon.a (mon.0) 730 : audit [INF] from='client.? 
192.168.123.102:0/4800252' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3932825893"}]': finished 2026-03-10T05:47:53.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:53 vm02 bash[17462]: audit 2026-03-10T05:47:52.144342+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1b", "id": [7, 2]}]': finished 2026-03-10T05:47:53.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:53 vm02 bash[17462]: audit 2026-03-10T05:47:52.149034+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.7", "id": [1, 2]}]': finished 2026-03-10T05:47:53.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:53 vm02 bash[17462]: audit 2026-03-10T05:47:52.149084+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.9", "id": [1, 2]}]': finished 2026-03-10T05:47:53.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:53 vm02 bash[17462]: audit 2026-03-10T05:47:52.149105+0000 mon.a (mon.0) 734 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.d", "id": [1, 5]}]': finished 2026-03-10T05:47:53.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:53 vm02 bash[17462]: cluster 2026-03-10T05:47:52.149125+0000 mon.a (mon.0) 735 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-10T05:47:53.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:53 vm02 bash[17462]: audit 2026-03-10T05:47:52.329296+0000 mon.a (mon.0) 736 : audit [INF] from='client.? 192.168.123.102:0/487299864' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3932825893"}]: dispatch 2026-03-10T05:47:53.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:53 vm02 bash[17462]: audit 2026-03-10T05:47:52.636409+0000 mgr.y (mgr.14409) 83 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:47:53.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:53 vm02 bash[22526]: audit 2026-03-10T05:47:52.144294+0000 mon.a (mon.0) 730 : audit [INF] from='client.? 
192.168.123.102:0/4800252' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3932825893"}]': finished 2026-03-10T05:47:53.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:53 vm02 bash[22526]: audit 2026-03-10T05:47:52.144342+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1b", "id": [7, 2]}]': finished 2026-03-10T05:47:53.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:53 vm02 bash[22526]: audit 2026-03-10T05:47:52.149034+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.7", "id": [1, 2]}]': finished 2026-03-10T05:47:53.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:53 vm02 bash[22526]: audit 2026-03-10T05:47:52.149084+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.9", "id": [1, 2]}]': finished 2026-03-10T05:47:53.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:53 vm02 bash[22526]: audit 2026-03-10T05:47:52.149105+0000 mon.a (mon.0) 734 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.d", "id": [1, 5]}]': finished 2026-03-10T05:47:53.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:53 vm02 bash[22526]: cluster 2026-03-10T05:47:52.149125+0000 mon.a (mon.0) 735 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-10T05:47:53.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:53 vm02 bash[22526]: audit 2026-03-10T05:47:52.329296+0000 mon.a (mon.0) 736 : audit [INF] from='client.? 192.168.123.102:0/487299864' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3932825893"}]: dispatch 2026-03-10T05:47:53.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:53 vm02 bash[22526]: audit 2026-03-10T05:47:52.636409+0000 mgr.y (mgr.14409) 83 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:47:54.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:54 vm05 bash[17864]: audit 2026-03-10T05:47:53.150917+0000 mon.a (mon.0) 737 : audit [INF] from='client.? 192.168.123.102:0/487299864' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3932825893"}]': finished 2026-03-10T05:47:54.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:54 vm05 bash[17864]: cluster 2026-03-10T05:47:53.150972+0000 mon.a (mon.0) 738 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-10T05:47:54.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:54 vm05 bash[17864]: audit 2026-03-10T05:47:53.359666+0000 mon.a (mon.0) 739 : audit [INF] from='client.? 
192.168.123.102:0/4153579157' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/2702126893"}]: dispatch 2026-03-10T05:47:54.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:54 vm05 bash[17864]: cluster 2026-03-10T05:47:53.689601+0000 mgr.y (mgr.14409) 84 : cluster [DBG] pgmap v60: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 28 KiB/s rd, 0 B/s wr, 38 op/s 2026-03-10T05:47:54.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:54 vm02 bash[17462]: audit 2026-03-10T05:47:53.150917+0000 mon.a (mon.0) 737 : audit [INF] from='client.? 192.168.123.102:0/487299864' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3932825893"}]': finished 2026-03-10T05:47:54.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:54 vm02 bash[17462]: cluster 2026-03-10T05:47:53.150972+0000 mon.a (mon.0) 738 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-10T05:47:54.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:54 vm02 bash[17462]: audit 2026-03-10T05:47:53.359666+0000 mon.a (mon.0) 739 : audit [INF] from='client.? 192.168.123.102:0/4153579157' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/2702126893"}]: dispatch 2026-03-10T05:47:54.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:54 vm02 bash[17462]: cluster 2026-03-10T05:47:53.689601+0000 mgr.y (mgr.14409) 84 : cluster [DBG] pgmap v60: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 28 KiB/s rd, 0 B/s wr, 38 op/s 2026-03-10T05:47:54.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:54 vm02 bash[22526]: audit 2026-03-10T05:47:53.150917+0000 mon.a (mon.0) 737 : audit [INF] from='client.? 192.168.123.102:0/487299864' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3932825893"}]': finished 2026-03-10T05:47:54.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:54 vm02 bash[22526]: cluster 2026-03-10T05:47:53.150972+0000 mon.a (mon.0) 738 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-10T05:47:54.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:54 vm02 bash[22526]: audit 2026-03-10T05:47:53.359666+0000 mon.a (mon.0) 739 : audit [INF] from='client.? 192.168.123.102:0/4153579157' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/2702126893"}]: dispatch 2026-03-10T05:47:54.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:54 vm02 bash[22526]: cluster 2026-03-10T05:47:53.689601+0000 mgr.y (mgr.14409) 84 : cluster [DBG] pgmap v60: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 28 KiB/s rd, 0 B/s wr, 38 op/s 2026-03-10T05:47:55.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:55 vm05 bash[17864]: audit 2026-03-10T05:47:54.176616+0000 mon.a (mon.0) 740 : audit [INF] from='client.? 
192.168.123.102:0/4153579157' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/2702126893"}]': finished 2026-03-10T05:47:55.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:55 vm05 bash[17864]: cluster 2026-03-10T05:47:54.176941+0000 mon.a (mon.0) 741 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-10T05:47:55.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:55 vm05 bash[17864]: audit 2026-03-10T05:47:54.351951+0000 mon.b (mon.2) 30 : audit [INF] from='client.? 192.168.123.102:0/3934774681' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/4232033379"}]: dispatch 2026-03-10T05:47:55.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:55 vm05 bash[17864]: audit 2026-03-10T05:47:54.357541+0000 mon.a (mon.0) 742 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/4232033379"}]: dispatch 2026-03-10T05:47:55.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:55 vm02 bash[17462]: audit 2026-03-10T05:47:54.176616+0000 mon.a (mon.0) 740 : audit [INF] from='client.? 192.168.123.102:0/4153579157' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/2702126893"}]': finished 2026-03-10T05:47:55.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:55 vm02 bash[17462]: cluster 2026-03-10T05:47:54.176941+0000 mon.a (mon.0) 741 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-10T05:47:55.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:55 vm02 bash[17462]: audit 2026-03-10T05:47:54.351951+0000 mon.b (mon.2) 30 : audit [INF] from='client.? 192.168.123.102:0/3934774681' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/4232033379"}]: dispatch 2026-03-10T05:47:55.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:55 vm02 bash[17462]: audit 2026-03-10T05:47:54.357541+0000 mon.a (mon.0) 742 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/4232033379"}]: dispatch 2026-03-10T05:47:55.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:55 vm02 bash[22526]: audit 2026-03-10T05:47:54.176616+0000 mon.a (mon.0) 740 : audit [INF] from='client.? 192.168.123.102:0/4153579157' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/2702126893"}]': finished 2026-03-10T05:47:55.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:55 vm02 bash[22526]: cluster 2026-03-10T05:47:54.176941+0000 mon.a (mon.0) 741 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-10T05:47:55.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:55 vm02 bash[22526]: audit 2026-03-10T05:47:54.351951+0000 mon.b (mon.2) 30 : audit [INF] from='client.? 192.168.123.102:0/3934774681' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/4232033379"}]: dispatch 2026-03-10T05:47:55.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:55 vm02 bash[22526]: audit 2026-03-10T05:47:54.357541+0000 mon.a (mon.0) 742 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/4232033379"}]: dispatch
2026-03-10T05:47:56.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:56 vm05 bash[17864]: audit 2026-03-10T05:47:55.176885+0000 mon.a (mon.0) 743 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/4232033379"}]': finished
2026-03-10T05:47:56.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:56 vm05 bash[17864]: cluster 2026-03-10T05:47:55.176908+0000 mon.a (mon.0) 744 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in
2026-03-10T05:47:56.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:56 vm05 bash[17864]: audit 2026-03-10T05:47:55.365080+0000 mon.c (mon.1) 97 : audit [INF] from='client.? 192.168.123.102:0/1681330770' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/123828670"}]: dispatch
2026-03-10T05:47:56.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:56 vm05 bash[17864]: audit 2026-03-10T05:47:55.365435+0000 mon.a (mon.0) 745 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/123828670"}]: dispatch
2026-03-10T05:47:56.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:56 vm05 bash[17864]: cluster 2026-03-10T05:47:55.689879+0000 mgr.y (mgr.14409) 85 : cluster [DBG] pgmap v63: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 28 KiB/s rd, 0 B/s wr, 38 op/s
2026-03-10T05:47:56.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:56 vm02 bash[17462]: audit 2026-03-10T05:47:55.176885+0000 mon.a (mon.0) 743 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/4232033379"}]': finished
2026-03-10T05:47:56.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:56 vm02 bash[17462]: cluster 2026-03-10T05:47:55.176908+0000 mon.a (mon.0) 744 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in
2026-03-10T05:47:56.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:56 vm02 bash[17462]: audit 2026-03-10T05:47:55.365080+0000 mon.c (mon.1) 97 : audit [INF] from='client.? 192.168.123.102:0/1681330770' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/123828670"}]: dispatch
2026-03-10T05:47:56.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:56 vm02 bash[17462]: audit 2026-03-10T05:47:55.365435+0000 mon.a (mon.0) 745 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/123828670"}]: dispatch
2026-03-10T05:47:56.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:56 vm02 bash[17462]: cluster 2026-03-10T05:47:55.689879+0000 mgr.y (mgr.14409) 85 : cluster [DBG] pgmap v63: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 28 KiB/s rd, 0 B/s wr, 38 op/s
2026-03-10T05:47:56.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:56 vm02 bash[22526]: audit 2026-03-10T05:47:55.176885+0000 mon.a (mon.0) 743 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/4232033379"}]': finished
2026-03-10T05:47:56.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:56 vm02 bash[22526]: cluster 2026-03-10T05:47:55.176908+0000 mon.a (mon.0) 744 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in
2026-03-10T05:47:56.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:56 vm02 bash[22526]: audit 2026-03-10T05:47:55.365080+0000 mon.c (mon.1) 97 : audit [INF] from='client.? 192.168.123.102:0/1681330770' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/123828670"}]: dispatch
2026-03-10T05:47:56.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:56 vm02 bash[22526]: audit 2026-03-10T05:47:55.365435+0000 mon.a (mon.0) 745 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/123828670"}]: dispatch
2026-03-10T05:47:56.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:56 vm02 bash[22526]: cluster 2026-03-10T05:47:55.689879+0000 mgr.y (mgr.14409) 85 : cluster [DBG] pgmap v63: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 28 KiB/s rd, 0 B/s wr, 38 op/s
2026-03-10T05:47:57.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:57 vm05 bash[17864]: audit 2026-03-10T05:47:56.252114+0000 mon.a (mon.0) 746 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/123828670"}]': finished
2026-03-10T05:47:57.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:57 vm05 bash[17864]: cluster 2026-03-10T05:47:56.252300+0000 mon.a (mon.0) 747 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in
2026-03-10T05:47:57.508 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:57 vm05 bash[17864]: audit 2026-03-10T05:47:56.432632+0000 mon.a (mon.0) 748 : audit [INF] from='client.? 192.168.123.102:0/1975212873' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3250290581"}]: dispatch
2026-03-10T05:47:57.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:47:57 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:47:57] "GET /metrics HTTP/1.1" 200 214534 "" "Prometheus/2.33.4"
2026-03-10T05:47:57.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:57 vm02 bash[17462]: audit 2026-03-10T05:47:56.252114+0000 mon.a (mon.0) 746 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/123828670"}]': finished
2026-03-10T05:47:57.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:57 vm02 bash[17462]: cluster 2026-03-10T05:47:56.252300+0000 mon.a (mon.0) 747 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in
2026-03-10T05:47:57.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:57 vm02 bash[17462]: audit 2026-03-10T05:47:56.432632+0000 mon.a (mon.0) 748 : audit [INF] from='client.? 192.168.123.102:0/1975212873' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3250290581"}]: dispatch
2026-03-10T05:47:57.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:57 vm02 bash[22526]: audit 2026-03-10T05:47:56.252114+0000 mon.a (mon.0) 746 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/123828670"}]': finished
2026-03-10T05:47:57.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:57 vm02 bash[22526]: cluster 2026-03-10T05:47:56.252300+0000 mon.a (mon.0) 747 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in
2026-03-10T05:47:57.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:57 vm02 bash[22526]: audit 2026-03-10T05:47:56.432632+0000 mon.a (mon.0) 748 : audit [INF] from='client.? 192.168.123.102:0/1975212873' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3250290581"}]: dispatch
2026-03-10T05:47:58.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:58 vm02 bash[17462]: audit 2026-03-10T05:47:57.252243+0000 mon.a (mon.0) 749 : audit [INF] from='client.? 192.168.123.102:0/1975212873' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3250290581"}]': finished
2026-03-10T05:47:58.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:58 vm02 bash[17462]: cluster 2026-03-10T05:47:57.252317+0000 mon.a (mon.0) 750 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in
2026-03-10T05:47:58.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:58 vm02 bash[17462]: audit 2026-03-10T05:47:57.448278+0000 mon.a (mon.0) 751 : audit [INF] from='client.? 192.168.123.102:0/2257119136' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/4276843242"}]: dispatch
2026-03-10T05:47:58.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:58 vm02 bash[17462]: cluster 2026-03-10T05:47:57.690281+0000 mgr.y (mgr.14409) 86 : cluster [DBG] pgmap v66: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:47:58.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:58 vm02 bash[22526]: audit 2026-03-10T05:47:57.252243+0000 mon.a (mon.0) 749 : audit [INF] from='client.? 192.168.123.102:0/1975212873' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3250290581"}]': finished
2026-03-10T05:47:58.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:58 vm02 bash[22526]: cluster 2026-03-10T05:47:57.252317+0000 mon.a (mon.0) 750 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in
2026-03-10T05:47:58.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:58 vm02 bash[22526]: audit 2026-03-10T05:47:57.448278+0000 mon.a (mon.0) 751 : audit [INF] from='client.? 192.168.123.102:0/2257119136' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/4276843242"}]: dispatch
2026-03-10T05:47:58.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:58 vm02 bash[22526]: cluster 2026-03-10T05:47:57.690281+0000 mgr.y (mgr.14409) 86 : cluster [DBG] pgmap v66: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:47:58.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:58 vm05 bash[17864]: audit 2026-03-10T05:47:57.252243+0000 mon.a (mon.0) 749 : audit [INF] from='client.? 192.168.123.102:0/1975212873' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3250290581"}]': finished
2026-03-10T05:47:58.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:58 vm05 bash[17864]: cluster 2026-03-10T05:47:57.252317+0000 mon.a (mon.0) 750 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in
2026-03-10T05:47:58.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:58 vm05 bash[17864]: audit 2026-03-10T05:47:57.448278+0000 mon.a (mon.0) 751 : audit [INF] from='client.? 192.168.123.102:0/2257119136' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/4276843242"}]: dispatch
2026-03-10T05:47:58.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:58 vm05 bash[17864]: cluster 2026-03-10T05:47:57.690281+0000 mgr.y (mgr.14409) 86 : cluster [DBG] pgmap v66: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:47:59.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:59 vm02 bash[17462]: audit 2026-03-10T05:47:58.271757+0000 mon.a (mon.0) 752 : audit [INF] from='client.? 192.168.123.102:0/2257119136' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/4276843242"}]': finished
2026-03-10T05:47:59.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:59 vm02 bash[17462]: cluster 2026-03-10T05:47:58.271839+0000 mon.a (mon.0) 753 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in
2026-03-10T05:47:59.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:59 vm02 bash[17462]: audit 2026-03-10T05:47:58.447487+0000 mon.c (mon.1) 98 : audit [INF] from='client.? 192.168.123.102:0/2505924984' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/123828670"}]: dispatch
2026-03-10T05:47:59.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:47:59 vm02 bash[17462]: audit 2026-03-10T05:47:58.448033+0000 mon.a (mon.0) 754 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/123828670"}]: dispatch
2026-03-10T05:47:59.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:59 vm02 bash[22526]: audit 2026-03-10T05:47:58.271757+0000 mon.a (mon.0) 752 : audit [INF] from='client.? 192.168.123.102:0/2257119136' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/4276843242"}]': finished
2026-03-10T05:47:59.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:59 vm02 bash[22526]: cluster 2026-03-10T05:47:58.271839+0000 mon.a (mon.0) 753 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in
2026-03-10T05:47:59.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:59 vm02 bash[22526]: audit 2026-03-10T05:47:58.447487+0000 mon.c (mon.1) 98 : audit [INF] from='client.? 192.168.123.102:0/2505924984' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/123828670"}]: dispatch
2026-03-10T05:47:59.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:47:59 vm02 bash[22526]: audit 2026-03-10T05:47:58.448033+0000 mon.a (mon.0) 754 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/123828670"}]: dispatch
2026-03-10T05:47:59.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:59 vm05 bash[17864]: audit 2026-03-10T05:47:58.271757+0000 mon.a (mon.0) 752 : audit [INF] from='client.? 192.168.123.102:0/2257119136' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/4276843242"}]': finished
2026-03-10T05:47:59.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:59 vm05 bash[17864]: cluster 2026-03-10T05:47:58.271839+0000 mon.a (mon.0) 753 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in
2026-03-10T05:47:59.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:59 vm05 bash[17864]: audit 2026-03-10T05:47:58.447487+0000 mon.c (mon.1) 98 : audit [INF] from='client.? 192.168.123.102:0/2505924984' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/123828670"}]: dispatch
2026-03-10T05:47:59.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:47:59 vm05 bash[17864]: audit 2026-03-10T05:47:58.448033+0000 mon.a (mon.0) 754 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/123828670"}]: dispatch
2026-03-10T05:48:00.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:00 vm02 bash[17462]: audit 2026-03-10T05:47:59.277287+0000 mon.a (mon.0) 755 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/123828670"}]': finished
2026-03-10T05:48:00.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:00 vm02 bash[17462]: cluster 2026-03-10T05:47:59.277416+0000 mon.a (mon.0) 756 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in
2026-03-10T05:48:00.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:00 vm02 bash[17462]: cluster 2026-03-10T05:47:59.690563+0000 mgr.y (mgr.14409) 87 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 99 MiB used, 160 GiB / 160 GiB avail; 18 KiB/s rd, 0 B/s wr, 26 op/s; 0 B/s, 0 objects/s recovering
2026-03-10T05:48:00.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:00 vm02 bash[22526]: audit 2026-03-10T05:47:59.277287+0000 mon.a (mon.0) 755 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/123828670"}]': finished
2026-03-10T05:48:00.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:00 vm02 bash[22526]: cluster 2026-03-10T05:47:59.277416+0000 mon.a (mon.0) 756 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in
2026-03-10T05:48:00.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:00 vm02 bash[22526]: cluster 2026-03-10T05:47:59.690563+0000 mgr.y (mgr.14409) 87 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 99 MiB used, 160 GiB / 160 GiB avail; 18 KiB/s rd, 0 B/s wr, 26 op/s; 0 B/s, 0 objects/s recovering
2026-03-10T05:48:00.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:00 vm05 bash[17864]: audit 2026-03-10T05:47:59.277287+0000 mon.a (mon.0) 755 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/123828670"}]': finished
2026-03-10T05:48:00.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:00 vm05 bash[17864]: cluster 2026-03-10T05:47:59.277416+0000 mon.a (mon.0) 756 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in
2026-03-10T05:48:00.758 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:00 vm05 bash[17864]: cluster 2026-03-10T05:47:59.690563+0000 mgr.y (mgr.14409) 87 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 99 MiB used, 160 GiB / 160 GiB avail; 18 KiB/s rd, 0 B/s wr, 26 op/s; 0 B/s, 0 objects/s recovering
2026-03-10T05:48:03.008 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:48:02 vm05 bash[18520]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:48:02] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T05:48:03.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:02 vm05 bash[17864]: cluster 2026-03-10T05:48:01.690862+0000 mgr.y (mgr.14409) 88 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 99 MiB used, 160 GiB / 160 GiB avail; 13 KiB/s rd, 0 B/s wr, 19 op/s; 0 B/s, 0 objects/s recovering
2026-03-10T05:48:03.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:02 vm02 bash[17462]: cluster 2026-03-10T05:48:01.690862+0000 mgr.y (mgr.14409) 88 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 99 MiB used, 160 GiB / 160 GiB avail; 13 KiB/s rd, 0 B/s wr, 19 op/s; 0 B/s, 0 objects/s recovering
2026-03-10T05:48:03.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:02 vm02 bash[22526]: cluster 2026-03-10T05:48:01.690862+0000 mgr.y (mgr.14409) 88 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 99 MiB used, 160 GiB / 160 GiB avail; 13 KiB/s rd, 0 B/s wr, 19 op/s; 0 B/s, 0 objects/s recovering
2026-03-10T05:48:04.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:03 vm05 bash[17864]: audit 2026-03-10T05:48:02.646280+0000 mgr.y (mgr.14409) 89 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:48:04.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:03 vm02 bash[17462]: audit 2026-03-10T05:48:02.646280+0000 mgr.y (mgr.14409) 89 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:48:04.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:03 vm02 bash[22526]: audit 2026-03-10T05:48:02.646280+0000 mgr.y (mgr.14409) 89 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:48:05.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:04 vm02 bash[17462]: cluster 2026-03-10T05:48:03.691364+0000 mgr.y (mgr.14409) 90 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 12 KiB/s rd, 0 B/s wr, 17 op/s; 0 B/s, 0 objects/s recovering
2026-03-10T05:48:05.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:04 vm02 bash[22526]: cluster 2026-03-10T05:48:03.691364+0000 mgr.y (mgr.14409) 90 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 12 KiB/s rd, 0 B/s wr, 17 op/s; 0 B/s, 0 objects/s recovering
2026-03-10T05:48:05.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:04 vm05 bash[17864]: cluster 2026-03-10T05:48:03.691364+0000 mgr.y (mgr.14409) 90 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 12 KiB/s rd, 0 B/s wr, 17 op/s; 0 B/s, 0 objects/s recovering
2026-03-10T05:48:07.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:06 vm02 bash[17462]: cluster 2026-03-10T05:48:05.691647+0000 mgr.y (mgr.14409) 91 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 9.7 KiB/s rd, 0 B/s wr, 13 op/s; 0 B/s, 0 objects/s recovering
2026-03-10T05:48:07.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:06 vm02 bash[22526]: cluster 2026-03-10T05:48:05.691647+0000 mgr.y (mgr.14409) 91 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 9.7 KiB/s rd, 0 B/s wr, 13 op/s; 0 B/s, 0 objects/s recovering
2026-03-10T05:48:07.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:06 vm05 bash[17864]: cluster 2026-03-10T05:48:05.691647+0000 mgr.y (mgr.14409) 91 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 9.7 KiB/s rd, 0 B/s wr, 13 op/s; 0 B/s, 0 objects/s recovering
2026-03-10T05:48:07.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:48:07 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:48:07] "GET /metrics HTTP/1.1" 200 214507 "" "Prometheus/2.33.4"
2026-03-10T05:48:09.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:08 vm05 bash[17864]: cluster 2026-03-10T05:48:07.691999+0000 mgr.y (mgr.14409) 92 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 543 B/s rd, 0 op/s
2026-03-10T05:48:09.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:08 vm02 bash[17462]: cluster 2026-03-10T05:48:07.691999+0000 mgr.y (mgr.14409) 92 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 543 B/s rd, 0 op/s
2026-03-10T05:48:09.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:08 vm02 bash[22526]: cluster 2026-03-10T05:48:07.691999+0000 mgr.y (mgr.14409) 92 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 543 B/s rd, 0 op/s
2026-03-10T05:48:11.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:10 vm05 bash[17864]: cluster 2026-03-10T05:48:09.692472+0000 mgr.y (mgr.14409) 93 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 983 B/s rd, 0 op/s
2026-03-10T05:48:11.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:10 vm02 bash[17462]: cluster 2026-03-10T05:48:09.692472+0000 mgr.y (mgr.14409) 93 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 983 B/s rd, 0 op/s
2026-03-10T05:48:11.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:10 vm02 bash[22526]: cluster 2026-03-10T05:48:09.692472+0000 mgr.y (mgr.14409) 93 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 983 B/s rd, 0 op/s
2026-03-10T05:48:13.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:12 vm02 bash[17462]: cluster 2026-03-10T05:48:11.692716+0000 mgr.y (mgr.14409) 94 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:13.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:12 vm02 bash[22526]: cluster 2026-03-10T05:48:11.692716+0000 mgr.y (mgr.14409) 94 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:13.258 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:48:12 vm05 bash[18520]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:48:12] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T05:48:13.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:12 vm05 bash[17864]: cluster 2026-03-10T05:48:11.692716+0000 mgr.y (mgr.14409) 94 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:14.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:13 vm05 bash[17864]: audit 2026-03-10T05:48:12.650998+0000 mgr.y (mgr.14409) 95 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:48:14.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:13 vm02 bash[17462]: audit 2026-03-10T05:48:12.650998+0000 mgr.y (mgr.14409) 95 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:48:14.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:13 vm02 bash[22526]: audit 2026-03-10T05:48:12.650998+0000 mgr.y (mgr.14409) 95 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:48:15.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:14 vm05 bash[17864]: cluster 2026-03-10T05:48:13.693293+0000 mgr.y (mgr.14409) 96 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:15.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:14 vm02 bash[17462]: cluster 2026-03-10T05:48:13.693293+0000 mgr.y (mgr.14409) 96 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:15.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:14 vm02 bash[22526]: cluster 2026-03-10T05:48:13.693293+0000 mgr.y (mgr.14409) 96 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:16.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:15 vm05 bash[17864]: cluster 2026-03-10T05:48:15.693626+0000 mgr.y (mgr.14409) 97 : cluster [DBG] pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:16.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:15 vm02 bash[17462]: cluster 2026-03-10T05:48:15.693626+0000 mgr.y (mgr.14409) 97 : cluster [DBG] pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:16.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:15 vm02 bash[22526]: cluster 2026-03-10T05:48:15.693626+0000 mgr.y (mgr.14409) 97 : cluster [DBG] pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:17.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:48:17 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:48:17] "GET /metrics HTTP/1.1" 200 214507 "" "Prometheus/2.33.4"
2026-03-10T05:48:19.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:18 vm05 bash[17864]: cluster 2026-03-10T05:48:17.693906+0000 mgr.y (mgr.14409) 98 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:19.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:18 vm02 bash[17462]: cluster 2026-03-10T05:48:17.693906+0000 mgr.y (mgr.14409) 98 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:19.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:18 vm02 bash[22526]: cluster 2026-03-10T05:48:17.693906+0000 mgr.y (mgr.14409) 98 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:21.008 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:20 vm05 bash[17864]: cluster 2026-03-10T05:48:19.694401+0000 mgr.y (mgr.14409) 99 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:21.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:20 vm02 bash[17462]: cluster 2026-03-10T05:48:19.694401+0000 mgr.y (mgr.14409) 99 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:21.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:20 vm02 bash[22526]: cluster 2026-03-10T05:48:19.694401+0000 mgr.y (mgr.14409) 99 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:23.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:22 vm02 bash[17462]: cluster 2026-03-10T05:48:21.694656+0000 mgr.y (mgr.14409) 100 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:23.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:22 vm02 bash[22526]: cluster 2026-03-10T05:48:21.694656+0000 mgr.y (mgr.14409) 100 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:23.258 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:48:22 vm05 bash[18520]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:48:22] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T05:48:23.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:22 vm05 bash[17864]: cluster 2026-03-10T05:48:21.694656+0000 mgr.y (mgr.14409) 100 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:24.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:23 vm02 bash[17462]: audit 2026-03-10T05:48:22.660899+0000 mgr.y (mgr.14409) 101 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:48:24.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:23 vm02 bash[22526]: audit 2026-03-10T05:48:22.660899+0000 mgr.y (mgr.14409) 101 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:48:24.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:23 vm05 bash[17864]: audit 2026-03-10T05:48:22.660899+0000 mgr.y (mgr.14409) 101 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:48:25.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:24 vm02 bash[17462]: cluster 2026-03-10T05:48:23.695256+0000 mgr.y (mgr.14409) 102 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:25.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:24 vm02 bash[22526]: cluster 2026-03-10T05:48:23.695256+0000 mgr.y (mgr.14409) 102 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:25.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:24 vm05 bash[17864]: cluster 2026-03-10T05:48:23.695256+0000 mgr.y (mgr.14409) 102 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:27.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:26 vm02 bash[17462]: cluster 2026-03-10T05:48:25.695658+0000 mgr.y (mgr.14409) 103 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:27.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:26 vm02 bash[22526]: cluster 2026-03-10T05:48:25.695658+0000 mgr.y (mgr.14409) 103 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:27.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:26 vm05 bash[17864]: cluster 2026-03-10T05:48:25.695658+0000 mgr.y (mgr.14409) 103 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:27.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:48:27 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:48:27] "GET /metrics HTTP/1.1" 200 214472 "" "Prometheus/2.33.4"
2026-03-10T05:48:29.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:28 vm02 bash[17462]: cluster 2026-03-10T05:48:27.695974+0000 mgr.y (mgr.14409) 104 : cluster [DBG] pgmap v83: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:29.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:28 vm02 bash[22526]: cluster 2026-03-10T05:48:27.695974+0000 mgr.y (mgr.14409) 104 : cluster [DBG] pgmap v83: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:29.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:28 vm05 bash[17864]: cluster 2026-03-10T05:48:27.695974+0000 mgr.y (mgr.14409) 104 : cluster [DBG] pgmap v83: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:31.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:30 vm02 bash[17462]: cluster 2026-03-10T05:48:29.696462+0000 mgr.y (mgr.14409) 105 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:31.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:30 vm02 bash[22526]: cluster 2026-03-10T05:48:29.696462+0000 mgr.y (mgr.14409) 105 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:31.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:30 vm05 bash[17864]: cluster 2026-03-10T05:48:29.696462+0000 mgr.y (mgr.14409) 105 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:33.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:32 vm02 bash[17462]: cluster 2026-03-10T05:48:31.696826+0000 mgr.y (mgr.14409) 106 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:33.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:32 vm02 bash[22526]: cluster 2026-03-10T05:48:31.696826+0000 mgr.y (mgr.14409) 106 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:33.258 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:48:32 vm05 bash[18520]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:48:32] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T05:48:33.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:32 vm05 bash[17864]: cluster 2026-03-10T05:48:31.696826+0000 mgr.y (mgr.14409) 106 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:34.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:33 vm02 bash[17462]: audit 2026-03-10T05:48:32.669246+0000 mgr.y (mgr.14409) 107 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:48:34.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:33 vm02 bash[22526]: audit 2026-03-10T05:48:32.669246+0000 mgr.y (mgr.14409) 107 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:48:34.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:33 vm05 bash[17864]: audit 2026-03-10T05:48:32.669246+0000 mgr.y (mgr.14409) 107 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:48:35.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:34 vm02 bash[17462]: cluster 2026-03-10T05:48:33.697393+0000 mgr.y (mgr.14409) 108 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:35.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:34 vm02 bash[22526]: cluster 2026-03-10T05:48:33.697393+0000 mgr.y (mgr.14409) 108 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:35.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:34 vm05 bash[17864]: cluster 2026-03-10T05:48:33.697393+0000 mgr.y (mgr.14409) 108 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:36 vm02 bash[17462]: cluster 2026-03-10T05:48:35.697688+0000 mgr.y (mgr.14409) 109 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:37.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:36 vm02 bash[22526]: cluster 2026-03-10T05:48:35.697688+0000 mgr.y (mgr.14409) 109 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:37.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:36 vm05 bash[17864]: cluster 2026-03-10T05:48:35.697688+0000 mgr.y (mgr.14409) 109 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:37.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:48:37 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:48:37] "GET /metrics HTTP/1.1" 200 214443 "" "Prometheus/2.33.4"
2026-03-10T05:48:39.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:38 vm02 bash[17462]: cluster 2026-03-10T05:48:37.698104+0000 mgr.y (mgr.14409) 110 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:39.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:38 vm02 bash[22526]: cluster 2026-03-10T05:48:37.698104+0000 mgr.y (mgr.14409) 110 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:39.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:38 vm05 bash[17864]: cluster 2026-03-10T05:48:37.698104+0000 mgr.y (mgr.14409) 110 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:41.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:40 vm05 bash[17864]: cluster 2026-03-10T05:48:39.698684+0000 mgr.y (mgr.14409) 111 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:41.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:40 vm02 bash[17462]: cluster 2026-03-10T05:48:39.698684+0000 mgr.y (mgr.14409) 111 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:41.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:40 vm02 bash[22526]: cluster 2026-03-10T05:48:39.698684+0000 mgr.y (mgr.14409) 111 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:43.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:42 vm02 bash[17462]: cluster 2026-03-10T05:48:41.699027+0000 mgr.y (mgr.14409) 112 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:43.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:42 vm02 bash[22526]: cluster 2026-03-10T05:48:41.699027+0000 mgr.y (mgr.14409) 112 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:43.258 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:48:42 vm05 bash[18520]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:48:42] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T05:48:43.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:42 vm05 bash[17864]: cluster 2026-03-10T05:48:41.699027+0000 mgr.y (mgr.14409) 112 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:43.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:48:43 vm02 bash[43400]: level=warn ts=2026-03-10T05:48:43.521Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs"
2026-03-10T05:48:43.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:48:43 vm02 bash[43400]: level=warn ts=2026-03-10T05:48:43.521Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T05:48:44.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:43 vm05 bash[17864]: audit 2026-03-10T05:48:42.678815+0000 mgr.y (mgr.14409) 113 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:48:44.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:43 vm02 bash[17462]: audit 2026-03-10T05:48:42.678815+0000 mgr.y (mgr.14409) 113 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:48:44.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:43 vm02 bash[22526]: audit 2026-03-10T05:48:42.678815+0000 mgr.y (mgr.14409) 113 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:48:45.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:44 vm05 bash[17864]: cluster 2026-03-10T05:48:43.699482+0000 mgr.y (mgr.14409) 114 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:45.333 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:44 vm02 bash[17462]: cluster 2026-03-10T05:48:43.699482+0000 mgr.y (mgr.14409) 114 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:45.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:44 vm02 bash[22526]: cluster 2026-03-10T05:48:43.699482+0000 mgr.y (mgr.14409) 114 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:47.148 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:46 vm02 bash[17462]: cluster 2026-03-10T05:48:45.699712+0000 mgr.y (mgr.14409) 115 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:47.148 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:46 vm02 bash[22526]: cluster 2026-03-10T05:48:45.699712+0000 mgr.y (mgr.14409) 115 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:47.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:46 vm05 bash[17864]: cluster 2026-03-10T05:48:45.699712+0000 mgr.y (mgr.14409) 115 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:47.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:48:47 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:48:47] "GET /metrics HTTP/1.1" 200 214443 "" "Prometheus/2.33.4"
2026-03-10T05:48:49.124 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:48 vm02 bash[17462]: cluster 2026-03-10T05:48:47.700069+0000 mgr.y (mgr.14409) 116 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:49.124 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:48 vm02 bash[22526]: cluster 2026-03-10T05:48:47.700069+0000 mgr.y (mgr.14409) 116 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:49.127 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:48 vm05 bash[17864]: cluster 2026-03-10T05:48:47.700069+0000 mgr.y (mgr.14409) 116 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:50.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:49 vm05 bash[17864]: audit 2026-03-10T05:48:49.107995+0000 mon.c (mon.1) 99 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:48:50.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:49 vm05 bash[17864]: audit 2026-03-10T05:48:49.108953+0000 mon.c (mon.1) 100 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:48:50.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:49 vm02 bash[17462]: audit 2026-03-10T05:48:49.107995+0000 mon.c (mon.1) 99 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:48:50.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:49 vm02 bash[17462]: audit 2026-03-10T05:48:49.108953+0000 mon.c (mon.1) 100 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:48:50.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:49 vm02 bash[22526]: audit 2026-03-10T05:48:49.107995+0000 mon.c (mon.1) 99 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:48:50.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:49 vm02 bash[22526]: audit 2026-03-10T05:48:49.108953+0000 mon.c (mon.1) 100 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:48:51.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:50 vm05 bash[17864]: cluster 2026-03-10T05:48:49.700633+0000 mgr.y (mgr.14409) 117 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:51.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:50 vm02 bash[17462]: cluster 2026-03-10T05:48:49.700633+0000 mgr.y (mgr.14409) 117 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:51.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:50 vm02 bash[22526]: cluster 2026-03-10T05:48:49.700633+0000 mgr.y (mgr.14409) 117 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:52.235 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:51 vm05 bash[17864]: audit 2026-03-10T05:48:51.761933+0000 mon.c (mon.1) 101 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T05:48:52.235 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:51 vm05 bash[17864]: audit 2026-03-10T05:48:51.762402+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T05:48:52.235 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:51 vm05 bash[17864]: audit 2026-03-10T05:48:51.774303+0000 mon.c (mon.1) 102 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T05:48:52.235 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:51 vm05 bash[17864]: audit 2026-03-10T05:48:51.774659+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T05:48:52.300 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:51 vm02 bash[17462]: audit 2026-03-10T05:48:51.761933+0000 mon.c (mon.1) 101 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T05:48:52.300 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:51 vm02 bash[17462]: audit 2026-03-10T05:48:51.762402+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T05:48:52.300 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:51 vm02 bash[17462]: audit 2026-03-10T05:48:51.774303+0000 mon.c (mon.1) 102 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T05:48:52.300 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:51 vm02 bash[17462]: audit 2026-03-10T05:48:51.774659+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T05:48:52.300 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:51 vm02 bash[22526]: audit 2026-03-10T05:48:51.761933+0000 mon.c (mon.1) 101 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T05:48:52.300 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:51 vm02 bash[22526]: audit 2026-03-10T05:48:51.762402+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T05:48:52.300 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:51 vm02 bash[22526]: audit 2026-03-10T05:48:51.774303+0000 mon.c (mon.1) 102 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T05:48:52.300 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:51 vm02 bash[22526]: audit 2026-03-10T05:48:51.774659+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T05:48:53.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:52 vm02 bash[17462]: cluster 2026-03-10T05:48:51.701174+0000 mgr.y (mgr.14409) 118 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:53.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:52 vm02 bash[17462]: audit 2026-03-10T05:48:52.245279+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:48:53.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:52 vm02 bash[17462]: audit 2026-03-10T05:48:52.310799+0000 mon.a (mon.0) 760 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:48:53.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:52 vm02 bash[17462]: audit 2026-03-10T05:48:52.472055+0000 mon.a (mon.0) 761 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:48:53.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:52 vm02 bash[17462]: audit 2026-03-10T05:48:52.689469+0000 mgr.y (mgr.14409) 119 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:48:53.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:52 vm02 bash[22526]: cluster 2026-03-10T05:48:51.701174+0000 mgr.y (mgr.14409) 118 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:53.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:52 vm02 bash[22526]: audit 2026-03-10T05:48:52.245279+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:48:53.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:52 vm02 bash[22526]: audit 2026-03-10T05:48:52.310799+0000 mon.a (mon.0) 760 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:48:53.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:52 vm02 bash[22526]: audit 2026-03-10T05:48:52.472055+0000 mon.a (mon.0) 761 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:48:53.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:52 vm02 bash[22526]: audit 2026-03-10T05:48:52.689469+0000 mgr.y (mgr.14409) 119 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:48:53.258 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:48:52 vm05 bash[18520]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:48:52] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T05:48:53.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:52 vm05 bash[17864]: cluster 2026-03-10T05:48:51.701174+0000 mgr.y (mgr.14409) 118 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:48:53.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:52 vm05 bash[17864]: audit 2026-03-10T05:48:52.245279+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:48:53.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:52 vm05 bash[17864]: audit 2026-03-10T05:48:52.310799+0000 mon.a (mon.0) 760 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:48:53.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:52 vm05 bash[17864]: audit 2026-03-10T05:48:52.472055+0000 mon.a (mon.0) 761 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:48:53.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:52 vm05 bash[17864]: audit 2026-03-10T05:48:52.689469+0000 mgr.y (mgr.14409) 119 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:48:53.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:48:53 vm02 bash[43400]: level=error ts=2026-03-10T05:48:53.511Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T05:48:53.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:48:53 vm02 bash[43400]: level=warn ts=2026-03-10T05:48:53.513Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T05:48:53.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:48:53 vm02 bash[43400]: level=warn ts=2026-03-10T05:48:53.514Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs"
2026-03-10T05:48:54.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:53 vm05 bash[17864]: cluster 2026-03-10T05:48:53.701777+0000 mgr.y (mgr.14409) 120 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:54.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:53 vm02 bash[17462]: cluster 2026-03-10T05:48:53.701777+0000 mgr.y (mgr.14409) 120 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:54.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:53 vm02 bash[22526]: cluster 2026-03-10T05:48:53.701777+0000 mgr.y (mgr.14409) 120 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:48:57.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:56 vm02 bash[17462]: cluster 2026-03-10T05:48:55.702046+0000 mgr.y (mgr.14409) 121 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T05:48:57.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:56 vm02 bash[22526]: cluster 2026-03-10T05:48:55.702046+0000 mgr.y (mgr.14409) 121 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T05:48:57.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:56 vm05 bash[17864]: cluster 2026-03-10T05:48:55.702046+0000 mgr.y (mgr.14409) 121 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T05:48:57.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:48:57 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:48:57] "GET /metrics HTTP/1.1" 200 214468 "" "Prometheus/2.33.4"
2026-03-10T05:48:59.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:48:58 vm02 bash[17462]: cluster 2026-03-10T05:48:57.702398+0000 mgr.y (mgr.14409) 122 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T05:48:59.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:48:58 vm02 bash[22526]: cluster 2026-03-10T05:48:57.702398+0000 mgr.y (mgr.14409) 122 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T05:48:59.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:48:58 vm05 bash[17864]: cluster 2026-03-10T05:48:57.702398+0000 mgr.y (mgr.14409) 122 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T05:49:01.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:00 vm05 bash[17864]: cluster 2026-03-10T05:48:59.702967+0000 mgr.y (mgr.14409) 123 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:49:01.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:00 vm02 bash[17462]: cluster 2026-03-10T05:48:59.702967+0000 mgr.y (mgr.14409) 123 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:49:01.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:00 vm02 bash[22526]: cluster 2026-03-10T05:48:59.702967+0000 mgr.y (mgr.14409) 123 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:49:03.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:02 vm02 bash[17462]: cluster 2026-03-10T05:49:01.703334+0000 mgr.y (mgr.14409) 124 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T05:49:03.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:02 vm02 bash[22526]: cluster 2026-03-10T05:49:01.703334+0000 mgr.y (mgr.14409) 124 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T05:49:03.258 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:49:02 vm05 bash[18520]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:49:02] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T05:49:03.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:02 vm05 bash[17864]: cluster 2026-03-10T05:49:01.703334+0000 mgr.y (mgr.14409) 124 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T05:49:03.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:49:03 vm02 bash[43400]: level=error ts=2026-03-10T05:49:03.512Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs"
2026-03-10T05:49:03.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:49:03 vm02 bash[43400]: level=warn ts=2026-03-10T05:49:03.514Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs"
2026-03-10T05:49:03.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:49:03 vm02 bash[43400]: level=warn ts=2026-03-10T05:49:03.514Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T05:49:04.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:03 vm05 bash[17864]: audit 2026-03-10T05:49:02.692264+0000 mgr.y (mgr.14409) 125 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:49:04.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:03 vm02 bash[17462]: audit 2026-03-10T05:49:02.692264+0000 mgr.y (mgr.14409) 125 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:49:04.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:03 vm02 bash[22526]: audit 2026-03-10T05:49:02.692264+0000 mgr.y (mgr.14409) 125 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:49:05.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:04 vm05 bash[17864]: cluster 2026-03-10T05:49:03.703945+0000 mgr.y (mgr.14409) 126 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:49:05.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:04 vm02 bash[17462]: cluster 2026-03-10T05:49:03.703945+0000 mgr.y (mgr.14409) 126 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:49:05.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:04 vm02 bash[22526]: cluster 2026-03-10T05:49:03.703945+0000 mgr.y (mgr.14409) 126 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:49:06.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:05 vm05 bash[17864]: cluster 2026-03-10T05:49:05.704371+0000 mgr.y (mgr.14409) 127 : cluster [DBG] pgmap v102: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:49:06.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:05 vm02 bash[17462]: cluster 2026-03-10T05:49:05.704371+0000 mgr.y (mgr.14409) 127 : cluster [DBG] pgmap v102: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:49:06.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:05 vm02 bash[22526]: cluster 2026-03-10T05:49:05.704371+0000 mgr.y (mgr.14409) 127 : cluster [DBG] pgmap v102: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:49:07.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:49:07 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:49:07] "GET /metrics HTTP/1.1" 200 214472 "" "Prometheus/2.33.4"
2026-03-10T05:49:09.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:08 vm02 bash[17462]: cluster 2026-03-10T05:49:07.704695+0000 mgr.y (mgr.14409) 128 : cluster [DBG] pgmap v103: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:49:09.104 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:08 vm02 bash[22526]: cluster 2026-03-10T05:49:07.704695+0000 mgr.y (mgr.14409) 128 : cluster [DBG] pgmap v103: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:49:09.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:08 vm05 bash[17864]: cluster 2026-03-10T05:49:07.704695+0000 mgr.y (mgr.14409) 128 : cluster [DBG] pgmap v103: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:49:11.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:10 vm02 bash[17462]: cluster 2026-03-10T05:49:09.705290+0000 mgr.y (mgr.14409) 129 : cluster [DBG] pgmap v104: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:49:11.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:10 vm02 bash[22526]: cluster 2026-03-10T05:49:09.705290+0000 mgr.y (mgr.14409) 129 : cluster [DBG] pgmap v104: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:49:11.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:10 vm05 bash[17864]: cluster 2026-03-10T05:49:09.705290+0000 mgr.y (mgr.14409) 129 : cluster [DBG] pgmap v104: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:49:13.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:12 vm02 bash[17462]: cluster 2026-03-10T05:49:11.705619+0000 mgr.y (mgr.14409) 130 : cluster [DBG] pgmap v105: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:49:13.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:12 vm02 bash[22526]: cluster 2026-03-10T05:49:11.705619+0000 mgr.y (mgr.14409) 130 : cluster [DBG] pgmap v105: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:49:13.258 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:49:12 vm05 bash[18520]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:49:12] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T05:49:13.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:12 vm05 bash[17864]: cluster 2026-03-10T05:49:11.705619+0000 mgr.y (mgr.14409) 130 : cluster [DBG] pgmap v105: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:49:13.802 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:49:13 vm02 bash[43400]: level=error ts=2026-03-10T05:49:13.513Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T05:49:13.802 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:49:13 vm02 bash[43400]: level=warn ts=2026-03-10T05:49:13.515Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T05:49:13.802 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:49:13 vm02 bash[43400]: level=warn ts=2026-03-10T05:49:13.515Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs"
2026-03-10T05:49:14.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:13 vm02 bash[17462]: audit 2026-03-10T05:49:12.696428+0000 mgr.y (mgr.14409) 131 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:49:14.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:13 vm02 bash[22526]: audit 2026-03-10T05:49:12.696428+0000 mgr.y (mgr.14409) 131 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:49:14.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:13 vm05 bash[17864]: audit 2026-03-10T05:49:12.696428+0000 mgr.y (mgr.14409) 131 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:49:15.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:14 vm02 bash[17462]: cluster 2026-03-10T05:49:13.706356+0000 mgr.y (mgr.14409) 132 : cluster [DBG] pgmap v106: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:49:15.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:14 vm02 bash[22526]: cluster 2026-03-10T05:49:13.706356+0000 mgr.y (mgr.14409) 132 : cluster [DBG] pgmap v106: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:49:15.258 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:14 vm05 bash[17864]: cluster 2026-03-10T05:49:13.706356+0000 mgr.y (mgr.14409) 132 : cluster [DBG] pgmap v106: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:49:17.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:16 vm02 bash[17462]: cluster 2026-03-10T05:49:15.706680+0000 mgr.y (mgr.14409) 133 : cluster [DBG] pgmap v107: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:49:17.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:16 vm02 bash[22526]: cluster 2026-03-10T05:49:15.706680+0000 mgr.y (mgr.14409) 133 : cluster [DBG] pgmap v107: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:49:17.257 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:16 vm05 bash[17864]: cluster 2026-03-10T05:49:15.706680+0000 mgr.y (mgr.14409) 133 : cluster [DBG] pgmap v107: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:49:17.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar
10 05:49:17 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:49:17] "GET /metrics HTTP/1.1" 200 214472 "" "Prometheus/2.33.4" 2026-03-10T05:49:19.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:18 vm02 bash[17462]: cluster 2026-03-10T05:49:17.707019+0000 mgr.y (mgr.14409) 134 : cluster [DBG] pgmap v108: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:19.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:18 vm02 bash[22526]: cluster 2026-03-10T05:49:17.707019+0000 mgr.y (mgr.14409) 134 : cluster [DBG] pgmap v108: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:19.257 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:18 vm05 bash[17864]: cluster 2026-03-10T05:49:17.707019+0000 mgr.y (mgr.14409) 134 : cluster [DBG] pgmap v108: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:21.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:20 vm02 bash[17462]: cluster 2026-03-10T05:49:19.707724+0000 mgr.y (mgr.14409) 135 : cluster [DBG] pgmap v109: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:49:21.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:20 vm02 bash[22526]: cluster 2026-03-10T05:49:19.707724+0000 mgr.y (mgr.14409) 135 : cluster [DBG] pgmap v109: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:49:21.257 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:20 vm05 bash[17864]: cluster 2026-03-10T05:49:19.707724+0000 mgr.y (mgr.14409) 135 : cluster [DBG] pgmap v109: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:49:23.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:22 vm02 bash[17462]: cluster 2026-03-10T05:49:21.708046+0000 mgr.y (mgr.14409) 136 : cluster [DBG] pgmap v110: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:23.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:22 vm02 bash[22526]: cluster 2026-03-10T05:49:21.708046+0000 mgr.y (mgr.14409) 136 : cluster [DBG] pgmap v110: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:23.256 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:49:22 vm05 bash[18520]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:49:22] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T05:49:23.256 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:22 vm05 bash[17864]: cluster 2026-03-10T05:49:21.708046+0000 mgr.y (mgr.14409) 136 : cluster [DBG] pgmap v110: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:23.835 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:49:23 vm02 bash[43400]: level=error ts=2026-03-10T05:49:23.513Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 
192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T05:49:23.835 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:49:23 vm02 bash[43400]: level=warn ts=2026-03-10T05:49:23.515Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T05:49:23.835 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:49:23 vm02 bash[43400]: level=warn ts=2026-03-10T05:49:23.516Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs" 2026-03-10T05:49:24.256 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:23 vm05 bash[17864]: audit 2026-03-10T05:49:22.698497+0000 mgr.y (mgr.14409) 137 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:49:24.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:23 vm02 bash[17462]: audit 2026-03-10T05:49:22.698497+0000 mgr.y (mgr.14409) 137 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:49:24.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:23 vm02 bash[22526]: audit 2026-03-10T05:49:22.698497+0000 mgr.y (mgr.14409) 137 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:49:25.256 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:24 vm05 bash[17864]: cluster 2026-03-10T05:49:23.708748+0000 mgr.y (mgr.14409) 138 : cluster [DBG] pgmap v111: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:49:25.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:24 vm02 bash[17462]: cluster 2026-03-10T05:49:23.708748+0000 mgr.y (mgr.14409) 138 : cluster [DBG] pgmap v111: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:49:25.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:24 vm02 bash[22526]: cluster 2026-03-10T05:49:23.708748+0000 mgr.y (mgr.14409) 138 : cluster [DBG] pgmap v111: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:49:27.146 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:26 vm02 bash[17462]: cluster 2026-03-10T05:49:25.709075+0000 mgr.y (mgr.14409) 139 : cluster [DBG] pgmap v112: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:27.146 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:26 vm02 bash[22526]: cluster 2026-03-10T05:49:25.709075+0000 mgr.y (mgr.14409) 139 : cluster [DBG] pgmap v112: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:27.256 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:26 vm05 bash[17864]: cluster 2026-03-10T05:49:25.709075+0000 mgr.y (mgr.14409) 139 : cluster [DBG] pgmap v112: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 
B/s rd, 0 op/s 2026-03-10T05:49:27.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:49:27 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:49:27] "GET /metrics HTTP/1.1" 200 214526 "" "Prometheus/2.33.4" 2026-03-10T05:49:29.256 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:28 vm05 bash[17864]: cluster 2026-03-10T05:49:27.709444+0000 mgr.y (mgr.14409) 140 : cluster [DBG] pgmap v113: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:29.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:28 vm02 bash[17462]: cluster 2026-03-10T05:49:27.709444+0000 mgr.y (mgr.14409) 140 : cluster [DBG] pgmap v113: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:29.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:28 vm02 bash[22526]: cluster 2026-03-10T05:49:27.709444+0000 mgr.y (mgr.14409) 140 : cluster [DBG] pgmap v113: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:31.255 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:30 vm05 bash[17864]: cluster 2026-03-10T05:49:29.709929+0000 mgr.y (mgr.14409) 141 : cluster [DBG] pgmap v114: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:49:31.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:30 vm02 bash[17462]: cluster 2026-03-10T05:49:29.709929+0000 mgr.y (mgr.14409) 141 : cluster [DBG] pgmap v114: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:49:31.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:30 vm02 bash[22526]: cluster 2026-03-10T05:49:29.709929+0000 mgr.y (mgr.14409) 141 : cluster [DBG] pgmap v114: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:49:33.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:32 vm02 bash[17462]: cluster 2026-03-10T05:49:31.710192+0000 mgr.y (mgr.14409) 142 : cluster [DBG] pgmap v115: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:33.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:32 vm02 bash[22526]: cluster 2026-03-10T05:49:31.710192+0000 mgr.y (mgr.14409) 142 : cluster [DBG] pgmap v115: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:33.255 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:32 vm05 bash[17864]: cluster 2026-03-10T05:49:31.710192+0000 mgr.y (mgr.14409) 142 : cluster [DBG] pgmap v115: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:33.255 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:49:32 vm05 bash[18520]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:49:32] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T05:49:33.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:49:33 vm02 bash[43400]: level=error ts=2026-03-10T05:49:33.514Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post 
\"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs" 2026-03-10T05:49:33.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:49:33 vm02 bash[43400]: level=warn ts=2026-03-10T05:49:33.516Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T05:49:33.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:49:33 vm02 bash[43400]: level=warn ts=2026-03-10T05:49:33.516Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs" 2026-03-10T05:49:34.255 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:33 vm05 bash[17864]: audit 2026-03-10T05:49:32.706133+0000 mgr.y (mgr.14409) 143 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:49:34.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:33 vm02 bash[17462]: audit 2026-03-10T05:49:32.706133+0000 mgr.y (mgr.14409) 143 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:49:34.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:33 vm02 bash[22526]: audit 2026-03-10T05:49:32.706133+0000 mgr.y (mgr.14409) 143 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:49:35.255 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:34 vm05 bash[17864]: cluster 2026-03-10T05:49:33.710708+0000 mgr.y (mgr.14409) 144 : cluster [DBG] pgmap v116: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:49:35.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:34 vm02 bash[17462]: cluster 2026-03-10T05:49:33.710708+0000 mgr.y (mgr.14409) 144 : cluster [DBG] pgmap v116: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:49:35.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:34 vm02 bash[22526]: cluster 2026-03-10T05:49:33.710708+0000 mgr.y (mgr.14409) 144 : cluster [DBG] pgmap v116: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:49:35.975 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set mon mon_warn_on_insecure_global_id_reclaim false --force' 2026-03-10T05:49:36.470 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed 
false --force' 2026-03-10T05:49:37.025 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set global log_to_journald false --force' 2026-03-10T05:49:37.146 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:36 vm02 bash[17462]: cluster 2026-03-10T05:49:35.710995+0000 mgr.y (mgr.14409) 145 : cluster [DBG] pgmap v117: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:37.146 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:49:37 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:49:37] "GET /metrics HTTP/1.1" 200 214529 "" "Prometheus/2.33.4" 2026-03-10T05:49:37.147 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:36 vm02 bash[22526]: cluster 2026-03-10T05:49:35.710995+0000 mgr.y (mgr.14409) 145 : cluster [DBG] pgmap v117: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:37.255 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:36 vm05 bash[17864]: cluster 2026-03-10T05:49:35.710995+0000 mgr.y (mgr.14409) 145 : cluster [DBG] pgmap v117: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:37.543 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1' 2026-03-10T05:49:38.020 INFO:teuthology.orchestra.run.vm02.stdout:Initiating upgrade to quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T05:49:38.106 INFO:teuthology.run_tasks:Running task cephadm.shell... 2026-03-10T05:49:38.109 INFO:tasks.cephadm:Running commands on role mon.a host ubuntu@vm02.local 2026-03-10T05:49:38.109 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'while ceph orch upgrade status | jq '"'"'.in_progress'"'"' | grep true && ! 
ceph orch upgrade status | jq '"'"'.message'"'"' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; ceph health detail ; sleep 30 ; done' 2026-03-10T05:49:38.571 INFO:teuthology.orchestra.run.vm02.stdout:true 2026-03-10T05:49:38.941 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T05:49:38.941 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.a vm02 *:9093,9094 running (2m) 46s ago 2m 16.4M - ba2b418f427c 3305780e5ef5 2026-03-10T05:49:38.941 INFO:teuthology.orchestra.run.vm02.stdout:grafana.a vm05 *:3000 running (2m) 46s ago 2m 42.4M - 8.3.5 dad864ee21e9 a370f3725ef2 2026-03-10T05:49:38.941 INFO:teuthology.orchestra.run.vm02.stdout:iscsi.foo.vm02.mxbwmh vm02 running (116s) 46s ago 116s 41.3M - 3.5 e1d6a67b021e c01d22afac06 2026-03-10T05:49:38.941 INFO:teuthology.orchestra.run.vm02.stdout:mgr.x vm05 *:8443 running (5m) 46s ago 5m 398M - 17.2.0 e1d6a67b021e b2f4d40768f0 2026-03-10T05:49:38.941 INFO:teuthology.orchestra.run.vm02.stdout:mgr.y vm02 *:9283 running (5m) 46s ago 5m 445M - 17.2.0 e1d6a67b021e a04e3f113661 2026-03-10T05:49:38.941 INFO:teuthology.orchestra.run.vm02.stdout:mon.a vm02 running (5m) 46s ago 5m 49.4M 2048M 17.2.0 e1d6a67b021e bf59d12a7baa 2026-03-10T05:49:38.941 INFO:teuthology.orchestra.run.vm02.stdout:mon.b vm05 running (5m) 46s ago 5m 45.9M 2048M 17.2.0 e1d6a67b021e 96a2a71fd403 2026-03-10T05:49:38.941 INFO:teuthology.orchestra.run.vm02.stdout:mon.c vm02 running (5m) 46s ago 5m 47.6M 2048M 17.2.0 e1d6a67b021e 2f6dcf491c61 2026-03-10T05:49:38.941 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.a vm02 *:9100 running (2m) 46s ago 2m 8040k - 1dbe0e931976 111574d033cc 2026-03-10T05:49:38.941 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.b vm05 *:9100 running (2m) 46s ago 2m 9784k - 1dbe0e931976 b6278e64d85c 2026-03-10T05:49:38.941 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (4m) 46s ago 4m 47.9M 4096M 17.2.0 e1d6a67b021e 563d55a3e6a4 2026-03-10T05:49:38.941 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (4m) 46s ago 4m 50.8M 4096M 17.2.0 e1d6a67b021e 8c25a1e89677 2026-03-10T05:49:38.941 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (4m) 46s ago 4m 46.2M 4096M 17.2.0 e1d6a67b021e 826f54bdbc5c 2026-03-10T05:49:38.941 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm02 running (4m) 46s ago 4m 49.1M 4096M 17.2.0 e1d6a67b021e 0c6cfa53c9fd 2026-03-10T05:49:38.941 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (3m) 46s ago 3m 49.4M 4096M 17.2.0 e1d6a67b021e 4ffe1741f201 2026-03-10T05:49:38.941 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (3m) 46s ago 3m 47.8M 4096M 17.2.0 e1d6a67b021e cba5583c238e 2026-03-10T05:49:38.941 INFO:teuthology.orchestra.run.vm02.stdout:osd.6 vm05 running (3m) 46s ago 3m 45.7M 4096M 17.2.0 e1d6a67b021e 9d1b370357d7 2026-03-10T05:49:38.941 INFO:teuthology.orchestra.run.vm02.stdout:osd.7 vm05 running (3m) 46s ago 3m 47.4M 4096M 17.2.0 e1d6a67b021e 8a4837b788cf 2026-03-10T05:49:38.941 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.a vm05 *:9095 running (2m) 46s ago 2m 45.8M - 514e6a882f6e 6c053703db40 2026-03-10T05:49:38.942 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm02.pbogjd vm02 *:8000 running (2m) 46s ago 2m 82.9M - 17.2.0 e1d6a67b021e 2ab2ffd1abaa 2026-03-10T05:49:38.942 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm05.hvmsxl vm05 *:8000 running (2m) 46s ago 2m 82.9M - 17.2.0 e1d6a67b021e 85d1c77b7e9d 2026-03-10T05:49:38.942 
INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm02.pglcfm vm02 *:80 running (2m) 46s ago 2m 82.7M - 17.2.0 e1d6a67b021e ef152a460673 2026-03-10T05:49:38.942 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm05.hqqmap vm05 *:80 running (2m) 46s ago 2m 82.5M - 17.2.0 e1d6a67b021e 29c9ee794f34 2026-03-10T05:49:39.151 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T05:49:39.151 INFO:teuthology.orchestra.run.vm02.stdout: "mon": { 2026-03-10T05:49:39.151 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3 2026-03-10T05:49:39.151 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T05:49:39.151 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": { 2026-03-10T05:49:39.151 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2 2026-03-10T05:49:39.151 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T05:49:39.151 INFO:teuthology.orchestra.run.vm02.stdout: "osd": { 2026-03-10T05:49:39.151 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8 2026-03-10T05:49:39.151 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T05:49:39.151 INFO:teuthology.orchestra.run.vm02.stdout: "mds": {}, 2026-03-10T05:49:39.151 INFO:teuthology.orchestra.run.vm02.stdout: "rgw": { 2026-03-10T05:49:39.151 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4 2026-03-10T05:49:39.151 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T05:49:39.151 INFO:teuthology.orchestra.run.vm02.stdout: "overall": { 2026-03-10T05:49:39.151 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 17 2026-03-10T05:49:39.151 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T05:49:39.151 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T05:49:39.255 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:38 vm05 bash[17864]: cluster 2026-03-10T05:49:37.711369+0000 mgr.y (mgr.14409) 146 : cluster [DBG] pgmap v118: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:39.255 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:38 vm05 bash[17864]: audit 2026-03-10T05:49:38.019199+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:49:39.255 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:38 vm05 bash[17864]: audit 2026-03-10T05:49:38.031554+0000 mon.c (mon.1) 103 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:49:39.255 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:38 vm05 bash[17864]: audit 2026-03-10T05:49:38.034571+0000 mon.c (mon.1) 104 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:49:39.255 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:38 vm05 bash[17864]: audit 2026-03-10T05:49:38.044291+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:49:39.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:38 vm02 bash[17462]: cluster 2026-03-10T05:49:37.711369+0000 mgr.y (mgr.14409) 146 : cluster [DBG] pgmap v118: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
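The command dispatched at 05:49:38.109 above is the upgrade watchdog for this job: it keeps looping while `ceph orch upgrade status` reports `.in_progress` as true and `.message` carries no Error, dumping `orch ps`, `versions`, `upgrade status`, and `health detail` on each pass (note also the `-e sha1=...` flag on the `cephadm shell` invocation, which exports `$sha1` into the container so the single-quoted `$sha1` in the upgrade-start command expands there, as the "Initiating upgrade" line confirms). A minimal standalone sketch of the same poll, unwrapped from the teuthology quoting; the MAX_POLLS bound is a hypothetical addition, not part of this job:

    # Sketch: poll cephadm upgrade status until it finishes or errors out.
    # Assumes `ceph` and `jq` are available, e.g. inside `cephadm shell`.
    MAX_POLLS=240                      # hypothetical safety bound (240 * 30s = 2h)
    for ((i = 0; i < MAX_POLLS; i++)); do
        status=$(ceph orch upgrade status)
        jq -e '.in_progress' <<< "$status" > /dev/null || break      # upgrade done
        jq -r '.message' <<< "$status" | grep -q Error && break      # upgrade failed
        ceph orch ps; ceph versions; ceph health detail              # progress dump
        sleep 30
    done

Unlike the one-liner in the log, this sketch captures the status JSON once per iteration instead of querying twice, but the exit conditions match.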
2026-03-10T05:49:39.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:38 vm02 bash[17462]: audit 2026-03-10T05:49:38.019199+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:49:39.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:38 vm02 bash[17462]: audit 2026-03-10T05:49:38.031554+0000 mon.c (mon.1) 103 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:49:39.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:38 vm02 bash[17462]: audit 2026-03-10T05:49:38.034571+0000 mon.c (mon.1) 104 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:49:39.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:38 vm02 bash[17462]: audit 2026-03-10T05:49:38.044291+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:49:39.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:38 vm02 bash[22526]: cluster 2026-03-10T05:49:37.711369+0000 mgr.y (mgr.14409) 146 : cluster [DBG] pgmap v118: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:49:39.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:38 vm02 bash[22526]: audit 2026-03-10T05:49:38.019199+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:49:39.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:38 vm02 bash[22526]: audit 2026-03-10T05:49:38.031554+0000 mon.c (mon.1) 103 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:49:39.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:38 vm02 bash[22526]: audit 2026-03-10T05:49:38.034571+0000 mon.c (mon.1) 104 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:49:39.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:38 vm02 bash[22526]: audit 2026-03-10T05:49:38.044291+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:49:39.335 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:49:39.336 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
2026-03-10T05:49:39.336 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true,
2026-03-10T05:49:39.336 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [],
2026-03-10T05:49:39.336 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "",
2026-03-10T05:49:39.336 INFO:teuthology.orchestra.run.vm02.stdout: "message": "Doing first pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df image"
2026-03-10T05:49:39.336 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:49:39.546 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_OK
2026-03-10T05:49:40.255 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:39 vm05 bash[17864]: audit 2026-03-10T05:49:38.013862+0000 mgr.y (mgr.14409) 147 : audit [DBG] from='client.24775 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:49:40.255 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:39 vm05 bash[17864]: cephadm 2026-03-10T05:49:38.014532+0000
mgr.y (mgr.14409) 148 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T05:49:40.255 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:39 vm05 bash[17864]: cephadm 2026-03-10T05:49:38.048993+0000 mgr.y (mgr.14409) 149 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T05:49:40.255 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:39 vm05 bash[17864]: audit 2026-03-10T05:49:38.561248+0000 mgr.y (mgr.14409) 150 : audit [DBG] from='client.24781 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:49:40.255 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:39 vm05 bash[17864]: audit 2026-03-10T05:49:38.745904+0000 mgr.y (mgr.14409) 151 : audit [DBG] from='client.24787 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:49:40.255 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:39 vm05 bash[17864]: audit 2026-03-10T05:49:38.935776+0000 mgr.y (mgr.14409) 152 : audit [DBG] from='client.14856 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:49:40.255 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:39 vm05 bash[17864]: audit 2026-03-10T05:49:39.150548+0000 mon.c (mon.1) 105 : audit [DBG] from='client.? 192.168.123.102:0/1140445229' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:49:40.255 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:39 vm05 bash[17864]: audit 2026-03-10T05:49:39.335538+0000 mgr.y (mgr.14409) 153 : audit [DBG] from='client.24802 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:49:40.255 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:39 vm05 bash[17864]: audit 2026-03-10T05:49:39.545410+0000 mon.c (mon.1) 106 : audit [DBG] from='client.? 
192.168.123.102:0/1871970390' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:49:40.255 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:39 vm05 bash[17864]: cluster 2026-03-10T05:49:39.711838+0000 mgr.y (mgr.14409) 154 : cluster [DBG] pgmap v119: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:49:40.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:39 vm02 bash[17462]: audit 2026-03-10T05:49:38.013862+0000 mgr.y (mgr.14409) 147 : audit [DBG] from='client.24775 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:49:40.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:39 vm02 bash[17462]: cephadm 2026-03-10T05:49:38.014532+0000 mgr.y (mgr.14409) 148 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T05:49:40.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:39 vm02 bash[17462]: cephadm 2026-03-10T05:49:38.048993+0000 mgr.y (mgr.14409) 149 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T05:49:40.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:39 vm02 bash[17462]: audit 2026-03-10T05:49:38.561248+0000 mgr.y (mgr.14409) 150 : audit [DBG] from='client.24781 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:49:40.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:39 vm02 bash[17462]: audit 2026-03-10T05:49:38.745904+0000 mgr.y (mgr.14409) 151 : audit [DBG] from='client.24787 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:49:40.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:39 vm02 bash[17462]: audit 2026-03-10T05:49:38.935776+0000 mgr.y (mgr.14409) 152 : audit [DBG] from='client.14856 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:49:40.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:39 vm02 bash[17462]: audit 2026-03-10T05:49:39.150548+0000 mon.c (mon.1) 105 : audit [DBG] from='client.? 192.168.123.102:0/1140445229' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:49:40.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:39 vm02 bash[17462]: audit 2026-03-10T05:49:39.335538+0000 mgr.y (mgr.14409) 153 : audit [DBG] from='client.24802 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:49:40.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:39 vm02 bash[17462]: audit 2026-03-10T05:49:39.545410+0000 mon.c (mon.1) 106 : audit [DBG] from='client.? 
192.168.123.102:0/1871970390' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:49:40.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:39 vm02 bash[17462]: cluster 2026-03-10T05:49:39.711838+0000 mgr.y (mgr.14409) 154 : cluster [DBG] pgmap v119: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:49:40.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:39 vm02 bash[22526]: audit 2026-03-10T05:49:38.013862+0000 mgr.y (mgr.14409) 147 : audit [DBG] from='client.24775 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:49:40.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:39 vm02 bash[22526]: cephadm 2026-03-10T05:49:38.014532+0000 mgr.y (mgr.14409) 148 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T05:49:40.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:39 vm02 bash[22526]: cephadm 2026-03-10T05:49:38.048993+0000 mgr.y (mgr.14409) 149 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T05:49:40.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:39 vm02 bash[22526]: audit 2026-03-10T05:49:38.561248+0000 mgr.y (mgr.14409) 150 : audit [DBG] from='client.24781 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:49:40.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:39 vm02 bash[22526]: audit 2026-03-10T05:49:38.745904+0000 mgr.y (mgr.14409) 151 : audit [DBG] from='client.24787 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:49:40.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:39 vm02 bash[22526]: audit 2026-03-10T05:49:38.935776+0000 mgr.y (mgr.14409) 152 : audit [DBG] from='client.14856 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:49:40.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:39 vm02 bash[22526]: audit 2026-03-10T05:49:39.150548+0000 mon.c (mon.1) 105 : audit [DBG] from='client.? 192.168.123.102:0/1140445229' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:49:40.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:39 vm02 bash[22526]: audit 2026-03-10T05:49:39.335538+0000 mgr.y (mgr.14409) 153 : audit [DBG] from='client.24802 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:49:40.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:39 vm02 bash[22526]: audit 2026-03-10T05:49:39.545410+0000 mon.c (mon.1) 106 : audit [DBG] from='client.? 
192.168.123.102:0/1871970390' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:49:40.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:39 vm02 bash[22526]: cluster 2026-03-10T05:49:39.711838+0000 mgr.y (mgr.14409) 154 : cluster [DBG] pgmap v119: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:49:43.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:42 vm02 bash[17462]: cluster 2026-03-10T05:49:41.712157+0000 mgr.y (mgr.14409) 155 : cluster [DBG] pgmap v120: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:43.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:42 vm02 bash[22526]: cluster 2026-03-10T05:49:41.712157+0000 mgr.y (mgr.14409) 155 : cluster [DBG] pgmap v120: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:43.255 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:49:42 vm05 bash[18520]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:49:42] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T05:49:43.255 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:42 vm05 bash[17864]: cluster 2026-03-10T05:49:41.712157+0000 mgr.y (mgr.14409) 155 : cluster [DBG] pgmap v120: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:43.799 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:49:43 vm02 bash[43400]: level=error ts=2026-03-10T05:49:43.514Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T05:49:43.800 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:49:43 vm02 bash[43400]: level=warn ts=2026-03-10T05:49:43.516Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs" 2026-03-10T05:49:43.800 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:49:43 vm02 bash[43400]: level=warn ts=2026-03-10T05:49:43.516Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T05:49:44.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:43 vm02 bash[17462]: audit 2026-03-10T05:49:42.713257+0000 mgr.y (mgr.14409) 156 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:49:44.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:43 vm02 bash[22526]: audit 2026-03-10T05:49:42.713257+0000 mgr.y (mgr.14409) 156 : audit [DBG] from='client.14712 -' 
entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:49:44.255 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:43 vm05 bash[17864]: audit 2026-03-10T05:49:42.713257+0000 mgr.y (mgr.14409) 156 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:49:45.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:44 vm02 bash[17462]: cluster 2026-03-10T05:49:43.712756+0000 mgr.y (mgr.14409) 157 : cluster [DBG] pgmap v121: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:49:45.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:44 vm02 bash[22526]: cluster 2026-03-10T05:49:43.712756+0000 mgr.y (mgr.14409) 157 : cluster [DBG] pgmap v121: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:49:45.254 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:44 vm05 bash[17864]: cluster 2026-03-10T05:49:43.712756+0000 mgr.y (mgr.14409) 157 : cluster [DBG] pgmap v121: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:49:47.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:46 vm02 bash[17462]: cluster 2026-03-10T05:49:45.713087+0000 mgr.y (mgr.14409) 158 : cluster [DBG] pgmap v122: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:47.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:46 vm02 bash[22526]: cluster 2026-03-10T05:49:45.713087+0000 mgr.y (mgr.14409) 158 : cluster [DBG] pgmap v122: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:47.254 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:46 vm05 bash[17864]: cluster 2026-03-10T05:49:45.713087+0000 mgr.y (mgr.14409) 158 : cluster [DBG] pgmap v122: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:47.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:49:47 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:49:47] "GET /metrics HTTP/1.1" 200 214529 "" "Prometheus/2.33.4" 2026-03-10T05:49:49.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:49 vm02 bash[17462]: cluster 2026-03-10T05:49:47.713447+0000 mgr.y (mgr.14409) 159 : cluster [DBG] pgmap v123: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:49.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:49 vm02 bash[22526]: cluster 2026-03-10T05:49:47.713447+0000 mgr.y (mgr.14409) 159 : cluster [DBG] pgmap v123: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:49.504 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:49 vm05 bash[17864]: cluster 2026-03-10T05:49:47.713447+0000 mgr.y (mgr.14409) 159 : cluster [DBG] pgmap v123: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:50.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:50 vm02 bash[17462]: cluster 2026-03-10T05:49:49.713898+0000 mgr.y (mgr.14409) 160 : cluster [DBG] pgmap v124: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:49:50.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:50 vm02 
bash[22526]: cluster 2026-03-10T05:49:49.713898+0000 mgr.y (mgr.14409) 160 : cluster [DBG] pgmap v124: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:49:50.504 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:50 vm05 bash[17864]: cluster 2026-03-10T05:49:49.713898+0000 mgr.y (mgr.14409) 160 : cluster [DBG] pgmap v124: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:49:53.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:52 vm02 bash[17462]: cluster 2026-03-10T05:49:51.714168+0000 mgr.y (mgr.14409) 161 : cluster [DBG] pgmap v125: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:53.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:52 vm02 bash[17462]: audit 2026-03-10T05:49:51.766841+0000 mon.c (mon.1) 107 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:49:53.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:52 vm02 bash[17462]: audit 2026-03-10T05:49:51.770405+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:49:53.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:52 vm02 bash[17462]: audit 2026-03-10T05:49:51.776237+0000 mon.c (mon.1) 108 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:49:53.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:52 vm02 bash[17462]: audit 2026-03-10T05:49:51.776443+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:49:53.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:52 vm02 bash[22526]: cluster 2026-03-10T05:49:51.714168+0000 mgr.y (mgr.14409) 161 : cluster [DBG] pgmap v125: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:53.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:52 vm02 bash[22526]: audit 2026-03-10T05:49:51.766841+0000 mon.c (mon.1) 107 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:49:53.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:52 vm02 bash[22526]: audit 2026-03-10T05:49:51.770405+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:49:53.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:52 vm02 bash[22526]: audit 2026-03-10T05:49:51.776237+0000 mon.c (mon.1) 108 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:49:53.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:52 vm02 bash[22526]: audit 2026-03-10T05:49:51.776443+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:49:53.254 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:49:52 vm05 bash[18520]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:49:52] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T05:49:53.254 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:52 vm05 bash[17864]: cluster 2026-03-10T05:49:51.714168+0000 mgr.y (mgr.14409) 161 : cluster [DBG] pgmap v125: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:53.254 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:52 vm05 bash[17864]: audit 2026-03-10T05:49:51.766841+0000 mon.c (mon.1) 107 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:49:53.254 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:52 vm05 bash[17864]: audit 2026-03-10T05:49:51.770405+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:49:53.254 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:52 vm05 bash[17864]: audit 2026-03-10T05:49:51.776237+0000 mon.c (mon.1) 108 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:49:53.254 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:52 vm05 bash[17864]: audit 2026-03-10T05:49:51.776443+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:49:53.833 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:49:53 vm02 bash[43400]: level=error ts=2026-03-10T05:49:53.515Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T05:49:53.833 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:49:53 vm02 bash[43400]: level=warn ts=2026-03-10T05:49:53.517Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs" 2026-03-10T05:49:53.833 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:49:53 vm02 bash[43400]: level=warn ts=2026-03-10T05:49:53.517Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T05:49:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:53 vm02 bash[17462]: audit 2026-03-10T05:49:52.720931+0000 
mgr.y (mgr.14409) 162 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:49:54.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:53 vm02 bash[22526]: audit 2026-03-10T05:49:52.720931+0000 mgr.y (mgr.14409) 162 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:49:54.254 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:53 vm05 bash[17864]: audit 2026-03-10T05:49:52.720931+0000 mgr.y (mgr.14409) 162 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:49:55.254 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:54 vm05 bash[17864]: cluster 2026-03-10T05:49:53.714784+0000 mgr.y (mgr.14409) 163 : cluster [DBG] pgmap v126: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:49:55.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:54 vm02 bash[17462]: cluster 2026-03-10T05:49:53.714784+0000 mgr.y (mgr.14409) 163 : cluster [DBG] pgmap v126: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:49:55.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:54 vm02 bash[22526]: cluster 2026-03-10T05:49:53.714784+0000 mgr.y (mgr.14409) 163 : cluster [DBG] pgmap v126: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:49:57.144 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:56 vm02 bash[17462]: cluster 2026-03-10T05:49:55.715173+0000 mgr.y (mgr.14409) 164 : cluster [DBG] pgmap v127: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:57.144 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:56 vm02 bash[22526]: cluster 2026-03-10T05:49:55.715173+0000 mgr.y (mgr.14409) 164 : cluster [DBG] pgmap v127: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:57.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:56 vm05 bash[17864]: cluster 2026-03-10T05:49:55.715173+0000 mgr.y (mgr.14409) 164 : cluster [DBG] pgmap v127: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:57.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:49:57 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:49:57] "GET /metrics HTTP/1.1" 200 214514 "" "Prometheus/2.33.4" 2026-03-10T05:49:59.254 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:49:58 vm05 bash[17864]: cluster 2026-03-10T05:49:57.715542+0000 mgr.y (mgr.14409) 165 : cluster [DBG] pgmap v128: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:59.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:49:58 vm02 bash[17462]: cluster 2026-03-10T05:49:57.715542+0000 mgr.y (mgr.14409) 165 : cluster [DBG] pgmap v128: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:49:59.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:49:58 vm02 bash[22526]: cluster 2026-03-10T05:49:57.715542+0000 mgr.y (mgr.14409) 165 : cluster [DBG] pgmap v128: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
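Every alertmanager.a error block in this log (bash[43400], repeating at roughly 10-second intervals) is the same failure: the dashboard certificate presented on 192.168.123.102:8443 and 192.168.123.105:8443 carries no IP subjectAltName, so Alertmanager's webhook posts to those addresses can never validate it; the webhook[0] URL also shows a doubled slash (`8443//api`). For reference, a self-signed certificate that would satisfy the check could be minted as below; this is a sketch only, not something this job does, the file names are hypothetical, and OpenSSL >= 1.1.1 is assumed for -addext:

    # Sketch: self-signed dashboard cert whose SAN covers the two mgr IPs seen above.
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout dashboard.key -out dashboard.crt \
        -subj '/CN=ceph-dashboard' \
        -addext 'subjectAltName = IP:192.168.123.102, IP:192.168.123.105'

Installing such a certificate would go through the dashboard module's set-ssl-certificate commands; this upgrade test never does so, which is why the notify retries keep cycling throughout the run.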
2026-03-10T05:50:01.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:00 vm05 bash[17864]: cluster 2026-03-10T05:49:59.715957+0000 mgr.y (mgr.14409) 166 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:01.254 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:00 vm05 bash[17864]: cluster 2026-03-10T05:50:00.000084+0000 mon.a (mon.0) 766 : cluster [INF] overall HEALTH_OK 2026-03-10T05:50:01.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:00 vm02 bash[17462]: cluster 2026-03-10T05:49:59.715957+0000 mgr.y (mgr.14409) 166 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:01.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:00 vm02 bash[17462]: cluster 2026-03-10T05:50:00.000084+0000 mon.a (mon.0) 766 : cluster [INF] overall HEALTH_OK 2026-03-10T05:50:01.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:00 vm02 bash[22526]: cluster 2026-03-10T05:49:59.715957+0000 mgr.y (mgr.14409) 166 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:01.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:00 vm02 bash[22526]: cluster 2026-03-10T05:50:00.000084+0000 mon.a (mon.0) 766 : cluster [INF] overall HEALTH_OK 2026-03-10T05:50:03.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:02 vm02 bash[17462]: cluster 2026-03-10T05:50:01.716366+0000 mgr.y (mgr.14409) 167 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:03.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:02 vm02 bash[22526]: cluster 2026-03-10T05:50:01.716366+0000 mgr.y (mgr.14409) 167 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:03.253 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:50:02 vm05 bash[18520]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:50:02] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T05:50:03.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:02 vm05 bash[17864]: cluster 2026-03-10T05:50:01.716366+0000 mgr.y (mgr.14409) 167 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:03.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:50:03 vm02 bash[43400]: level=error ts=2026-03-10T05:50:03.515Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs" 2026-03-10T05:50:03.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:50:03 vm02 bash[43400]: level=warn ts=2026-03-10T05:50:03.517Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot 
validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T05:50:03.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:50:03 vm02 bash[43400]: level=warn ts=2026-03-10T05:50:03.518Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs" 2026-03-10T05:50:04.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:03 vm05 bash[17864]: audit 2026-03-10T05:50:02.731323+0000 mgr.y (mgr.14409) 168 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:50:04.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:03 vm02 bash[17462]: audit 2026-03-10T05:50:02.731323+0000 mgr.y (mgr.14409) 168 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:50:04.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:03 vm02 bash[22526]: audit 2026-03-10T05:50:02.731323+0000 mgr.y (mgr.14409) 168 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:50:05.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:04 vm05 bash[17864]: cluster 2026-03-10T05:50:03.717147+0000 mgr.y (mgr.14409) 169 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:05.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:04 vm02 bash[17462]: cluster 2026-03-10T05:50:03.717147+0000 mgr.y (mgr.14409) 169 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:05.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:04 vm02 bash[22526]: cluster 2026-03-10T05:50:03.717147+0000 mgr.y (mgr.14409) 169 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:07.148 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:06 vm02 bash[17462]: cluster 2026-03-10T05:50:05.717470+0000 mgr.y (mgr.14409) 170 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:07.148 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:06 vm02 bash[22526]: cluster 2026-03-10T05:50:05.717470+0000 mgr.y (mgr.14409) 170 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:07.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:06 vm05 bash[17864]: cluster 2026-03-10T05:50:05.717470+0000 mgr.y (mgr.14409) 170 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:07.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:50:07 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:50:07] "GET /metrics HTTP/1.1" 200 214468 "" "Prometheus/2.33.4" 2026-03-10T05:50:09.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:08 vm05 bash[17864]: cluster 2026-03-10T05:50:07.717840+0000 mgr.y (mgr.14409) 171 : cluster [DBG] pgmap v133: 161 pgs: 161 
active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:50:09.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:08 vm02 bash[17462]: cluster 2026-03-10T05:50:07.717840+0000 mgr.y (mgr.14409) 171 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:50:09.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:08 vm02 bash[22526]: cluster 2026-03-10T05:50:07.717840+0000 mgr.y (mgr.14409) 171 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:50:09.828 INFO:teuthology.orchestra.run.vm02.stdout:true
2026-03-10T05:50:10.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:10 vm02 bash[17462]: cluster 2026-03-10T05:50:09.719803+0000 mgr.y (mgr.14409) 172 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:50:10.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:10 vm02 bash[17462]: audit 2026-03-10T05:50:09.805935+0000 mgr.y (mgr.14409) 173 : audit [DBG] from='client.14871 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:50:10.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:10 vm02 bash[22526]: cluster 2026-03-10T05:50:09.719803+0000 mgr.y (mgr.14409) 172 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:50:10.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:10 vm02 bash[22526]: audit 2026-03-10T05:50:09.805935+0000 mgr.y (mgr.14409) 173 : audit [DBG] from='client.14871 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:50:10.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:10 vm05 bash[17864]: cluster 2026-03-10T05:50:09.719803+0000 mgr.y (mgr.14409) 172 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:50:10.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:10 vm05 bash[17864]: audit 2026-03-10T05:50:09.805935+0000 mgr.y (mgr.14409) 173 : audit [DBG] from='client.14871 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:50:10.765 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T05:50:10.765 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.a vm02 *:9093,9094 running (2m) 78s ago 3m 16.4M - ba2b418f427c 3305780e5ef5
2026-03-10T05:50:10.765 INFO:teuthology.orchestra.run.vm02.stdout:grafana.a vm05 *:3000 running (2m) 78s ago 2m 42.4M - 8.3.5 dad864ee21e9 a370f3725ef2
2026-03-10T05:50:10.765 INFO:teuthology.orchestra.run.vm02.stdout:iscsi.foo.vm02.mxbwmh vm02 running (2m) 78s ago 2m 41.3M - 3.5 e1d6a67b021e c01d22afac06
2026-03-10T05:50:10.765 INFO:teuthology.orchestra.run.vm02.stdout:mgr.x vm05 *:8443 running (5m) 78s ago 5m 398M - 17.2.0 e1d6a67b021e b2f4d40768f0
2026-03-10T05:50:10.765 INFO:teuthology.orchestra.run.vm02.stdout:mgr.y vm02 *:9283 running (6m) 78s ago 6m 445M - 17.2.0 e1d6a67b021e a04e3f113661
2026-03-10T05:50:10.765 INFO:teuthology.orchestra.run.vm02.stdout:mon.a vm02 running (6m) 78s ago 6m 49.4M 2048M 17.2.0 e1d6a67b021e bf59d12a7baa
2026-03-10T05:50:10.766 INFO:teuthology.orchestra.run.vm02.stdout:mon.b vm05 running (5m) 78s ago 5m 45.9M 2048M 17.2.0 e1d6a67b021e 96a2a71fd403
2026-03-10T05:50:10.766 INFO:teuthology.orchestra.run.vm02.stdout:mon.c vm02 running (5m) 78s ago 5m 47.6M 2048M 17.2.0 e1d6a67b021e 2f6dcf491c61
2026-03-10T05:50:10.766 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.a vm02 *:9100 running (3m) 78s ago 3m 8040k - 1dbe0e931976 111574d033cc
2026-03-10T05:50:10.766 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.b vm05 *:9100 running (3m) 78s ago 3m 9784k - 1dbe0e931976 b6278e64d85c
2026-03-10T05:50:10.766 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (5m) 78s ago 5m 47.9M 4096M 17.2.0 e1d6a67b021e 563d55a3e6a4
2026-03-10T05:50:10.766 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (5m) 78s ago 5m 50.8M 4096M 17.2.0 e1d6a67b021e 8c25a1e89677
2026-03-10T05:50:10.766 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (4m) 78s ago 4m 46.2M 4096M 17.2.0 e1d6a67b021e 826f54bdbc5c
2026-03-10T05:50:10.766 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm02 running (4m) 78s ago 4m 49.1M 4096M 17.2.0 e1d6a67b021e 0c6cfa53c9fd
2026-03-10T05:50:10.766 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (4m) 78s ago 4m 49.4M 4096M 17.2.0 e1d6a67b021e 4ffe1741f201
2026-03-10T05:50:10.766 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (4m) 78s ago 4m 47.8M 4096M 17.2.0 e1d6a67b021e cba5583c238e
2026-03-10T05:50:10.766 INFO:teuthology.orchestra.run.vm02.stdout:osd.6 vm05 running (3m) 78s ago 3m 45.7M 4096M 17.2.0 e1d6a67b021e 9d1b370357d7
2026-03-10T05:50:10.766 INFO:teuthology.orchestra.run.vm02.stdout:osd.7 vm05 running (3m) 78s ago 3m 47.4M 4096M 17.2.0 e1d6a67b021e 8a4837b788cf
2026-03-10T05:50:10.766 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.a vm05 *:9095 running (2m) 78s ago 3m 45.8M - 514e6a882f6e 6c053703db40
2026-03-10T05:50:10.766 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm02.pbogjd vm02 *:8000 running (2m) 78s ago 2m 82.9M - 17.2.0 e1d6a67b021e 2ab2ffd1abaa
2026-03-10T05:50:10.766 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm05.hvmsxl vm05 *:8000 running (2m) 78s ago 2m 82.9M - 17.2.0 e1d6a67b021e 85d1c77b7e9d
2026-03-10T05:50:10.766 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm02.pglcfm vm02 *:80 running (2m) 78s ago 2m 82.7M - 17.2.0 e1d6a67b021e ef152a460673
2026-03-10T05:50:10.766 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm05.hqqmap vm05 *:80 running (2m) 78s ago 2m 82.5M - 17.2.0 e1d6a67b021e 29c9ee794f34
2026-03-10T05:50:11.018 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:50:11.018 INFO:teuthology.orchestra.run.vm02.stdout: "mon": {
2026-03-10T05:50:11.018 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3
2026-03-10T05:50:11.018 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:50:11.018 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": {
2026-03-10T05:50:11.018 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-10T05:50:11.018 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:50:11.018 INFO:teuthology.orchestra.run.vm02.stdout: "osd": {
2026-03-10T05:50:11.018 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8
2026-03-10T05:50:11.018 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:50:11.018 INFO:teuthology.orchestra.run.vm02.stdout: "mds": {},
2026-03-10T05:50:11.018 INFO:teuthology.orchestra.run.vm02.stdout: "rgw": {
2026-03-10T05:50:11.018 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4
2026-03-10T05:50:11.018 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:50:11.018 INFO:teuthology.orchestra.run.vm02.stdout: "overall": {
2026-03-10T05:50:11.018 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 17
2026-03-10T05:50:11.018 INFO:teuthology.orchestra.run.vm02.stdout: }
2026-03-10T05:50:11.018 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:50:11.231 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:50:11.231 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
2026-03-10T05:50:11.231 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true,
2026-03-10T05:50:11.231 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [],
2026-03-10T05:50:11.231 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "",
2026-03-10T05:50:11.231 INFO:teuthology.orchestra.run.vm02.stdout: "message": "Doing first pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df image"
2026-03-10T05:50:11.231 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:50:11.603 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_OK
2026-03-10T05:50:11.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:11 vm05 bash[17864]: audit 2026-03-10T05:50:10.491291+0000 mgr.y (mgr.14409) 174 : audit [DBG] from='client.24817 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:50:11.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:11 vm05 bash[17864]: audit 2026-03-10T05:50:10.759511+0000 mgr.y (mgr.14409) 175 : audit [DBG] from='client.14880 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:50:11.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:11 vm05 bash[17864]: audit 2026-03-10T05:50:11.017485+0000 mon.a (mon.0) 767 : audit [DBG] from='client.? 192.168.123.102:0/1805543775' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:50:11.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:11 vm02 bash[17462]: audit 2026-03-10T05:50:10.491291+0000 mgr.y (mgr.14409) 174 : audit [DBG] from='client.24817 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:50:11.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:11 vm02 bash[17462]: audit 2026-03-10T05:50:10.759511+0000 mgr.y (mgr.14409) 175 : audit [DBG] from='client.14880 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:50:11.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:11 vm02 bash[17462]: audit 2026-03-10T05:50:11.017485+0000 mon.a (mon.0) 767 : audit [DBG] from='client.? 
192.168.123.102:0/1805543775' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:50:11.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:11 vm02 bash[22526]: audit 2026-03-10T05:50:10.491291+0000 mgr.y (mgr.14409) 174 : audit [DBG] from='client.24817 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:50:11.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:11 vm02 bash[22526]: audit 2026-03-10T05:50:10.759511+0000 mgr.y (mgr.14409) 175 : audit [DBG] from='client.14880 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:50:11.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:11 vm02 bash[22526]: audit 2026-03-10T05:50:11.017485+0000 mon.a (mon.0) 767 : audit [DBG] from='client.? 192.168.123.102:0/1805543775' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:50:12.741 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:12 vm02 bash[17462]: audit 2026-03-10T05:50:11.230819+0000 mgr.y (mgr.14409) 176 : audit [DBG] from='client.24826 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:50:12.741 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:12 vm02 bash[17462]: audit 2026-03-10T05:50:11.602353+0000 mon.a (mon.0) 768 : audit [DBG] from='client.? 192.168.123.102:0/3337078032' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:50:12.741 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:12 vm02 bash[17462]: cluster 2026-03-10T05:50:11.720072+0000 mgr.y (mgr.14409) 177 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:12.741 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:12 vm02 bash[22526]: audit 2026-03-10T05:50:11.230819+0000 mgr.y (mgr.14409) 176 : audit [DBG] from='client.24826 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:50:12.741 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:12 vm02 bash[22526]: audit 2026-03-10T05:50:11.602353+0000 mon.a (mon.0) 768 : audit [DBG] from='client.? 192.168.123.102:0/3337078032' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:50:12.741 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:12 vm02 bash[22526]: cluster 2026-03-10T05:50:11.720072+0000 mgr.y (mgr.14409) 177 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:12.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:12 vm05 bash[17864]: audit 2026-03-10T05:50:11.230819+0000 mgr.y (mgr.14409) 176 : audit [DBG] from='client.24826 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:50:12.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:12 vm05 bash[17864]: audit 2026-03-10T05:50:11.602353+0000 mon.a (mon.0) 768 : audit [DBG] from='client.? 
192.168.123.102:0/3337078032' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:50:12.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:12 vm05 bash[17864]: cluster 2026-03-10T05:50:11.720072+0000 mgr.y (mgr.14409) 177 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:13.253 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:50:12 vm05 bash[18520]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:50:12] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T05:50:13.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:13 vm05 bash[17864]: audit 2026-03-10T05:50:12.738303+0000 mgr.y (mgr.14409) 178 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:50:13.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:13 vm02 bash[17462]: audit 2026-03-10T05:50:12.738303+0000 mgr.y (mgr.14409) 178 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:50:13.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:13 vm02 bash[22526]: audit 2026-03-10T05:50:12.738303+0000 mgr.y (mgr.14409) 178 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:50:13.836 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:50:13 vm02 bash[43400]: level=error ts=2026-03-10T05:50:13.516Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs" 2026-03-10T05:50:13.836 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:50:13 vm02 bash[43400]: level=warn ts=2026-03-10T05:50:13.518Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T05:50:13.836 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:50:13 vm02 bash[43400]: level=warn ts=2026-03-10T05:50:13.518Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs" 2026-03-10T05:50:15.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:14 vm05 bash[17864]: cluster 2026-03-10T05:50:13.720598+0000 mgr.y (mgr.14409) 179 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:15.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:14 vm02 bash[17462]: cluster 2026-03-10T05:50:13.720598+0000 mgr.y (mgr.14409) 179 : cluster [DBG] pgmap v136: 161 
pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:15.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:14 vm02 bash[22526]: cluster 2026-03-10T05:50:13.720598+0000 mgr.y (mgr.14409) 179 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:17.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:16 vm05 bash[17864]: cluster 2026-03-10T05:50:15.720928+0000 mgr.y (mgr.14409) 180 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:17.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:16 vm02 bash[17462]: cluster 2026-03-10T05:50:15.720928+0000 mgr.y (mgr.14409) 180 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:17.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:16 vm02 bash[22526]: cluster 2026-03-10T05:50:15.720928+0000 mgr.y (mgr.14409) 180 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:17.551 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:50:17 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:50:17] "GET /metrics HTTP/1.1" 200 214468 "" "Prometheus/2.33.4" 2026-03-10T05:50:19.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:18 vm05 bash[17864]: audit 2026-03-10T05:50:17.682863+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:50:19.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:18 vm05 bash[17864]: cephadm 2026-03-10T05:50:17.685219+0000 mgr.y (mgr.14409) 181 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (unknown) 2026-03-10T05:50:19.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:18 vm05 bash[17864]: cephadm 2026-03-10T05:50:17.685245+0000 mgr.y (mgr.14409) 182 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-10T05:50:19.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:18 vm05 bash[17864]: audit 2026-03-10T05:50:17.685870+0000 mon.c (mon.1) 109 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:50:19.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:18 vm05 bash[17864]: audit 2026-03-10T05:50:17.690915+0000 mon.a (mon.0) 770 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:50:19.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:18 vm05 bash[17864]: cephadm 2026-03-10T05:50:17.691993+0000 mgr.y (mgr.14409) 183 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.y) 2026-03-10T05:50:19.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:18 vm05 bash[17864]: cluster 2026-03-10T05:50:17.721287+0000 mgr.y (mgr.14409) 184 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:19.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:18 vm05 bash[17864]: cephadm 2026-03-10T05:50:17.830870+0000 mgr.y (mgr.14409) 185 : cephadm [INF] Upgrade: Pulling quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on vm05 2026-03-10T05:50:19.084 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:18 vm02 bash[17462]: audit 2026-03-10T05:50:17.682863+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:50:19.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:18 vm02 bash[17462]: cephadm 2026-03-10T05:50:17.685219+0000 mgr.y (mgr.14409) 181 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (unknown) 2026-03-10T05:50:19.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:18 vm02 bash[17462]: cephadm 2026-03-10T05:50:17.685245+0000 mgr.y (mgr.14409) 182 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-10T05:50:19.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:18 vm02 bash[17462]: audit 2026-03-10T05:50:17.685870+0000 mon.c (mon.1) 109 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:50:19.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:18 vm02 bash[17462]: audit 2026-03-10T05:50:17.690915+0000 mon.a (mon.0) 770 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:50:19.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:18 vm02 bash[17462]: cephadm 2026-03-10T05:50:17.691993+0000 mgr.y (mgr.14409) 183 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.y) 2026-03-10T05:50:19.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:18 vm02 bash[17462]: cluster 2026-03-10T05:50:17.721287+0000 mgr.y (mgr.14409) 184 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:19.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:18 vm02 bash[17462]: cephadm 2026-03-10T05:50:17.830870+0000 mgr.y (mgr.14409) 185 : cephadm [INF] Upgrade: Pulling quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on vm05 2026-03-10T05:50:19.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:18 vm02 bash[22526]: audit 2026-03-10T05:50:17.682863+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:50:19.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:18 vm02 bash[22526]: cephadm 2026-03-10T05:50:17.685219+0000 mgr.y (mgr.14409) 181 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (unknown) 2026-03-10T05:50:19.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:18 vm02 bash[22526]: cephadm 2026-03-10T05:50:17.685245+0000 mgr.y (mgr.14409) 182 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-10T05:50:19.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:18 vm02 bash[22526]: audit 2026-03-10T05:50:17.685870+0000 mon.c (mon.1) 109 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:50:19.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:18 vm02 bash[22526]: audit 2026-03-10T05:50:17.690915+0000 mon.a (mon.0) 770 : audit [INF] from='mgr.14409 ' entity='mgr.y' 2026-03-10T05:50:19.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:18 vm02 bash[22526]: cephadm 2026-03-10T05:50:17.691993+0000 mgr.y (mgr.14409) 183 : cephadm [INF] Upgrade: Need to upgrade 
myself (mgr.y) 2026-03-10T05:50:19.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:18 vm02 bash[22526]: cluster 2026-03-10T05:50:17.721287+0000 mgr.y (mgr.14409) 184 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:19.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:18 vm02 bash[22526]: cephadm 2026-03-10T05:50:17.830870+0000 mgr.y (mgr.14409) 185 : cephadm [INF] Upgrade: Pulling quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on vm05 2026-03-10T05:50:21.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:20 vm02 bash[17462]: cluster 2026-03-10T05:50:19.721645+0000 mgr.y (mgr.14409) 186 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:21.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:20 vm02 bash[22526]: cluster 2026-03-10T05:50:19.721645+0000 mgr.y (mgr.14409) 186 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:21.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:20 vm05 bash[17864]: cluster 2026-03-10T05:50:19.721645+0000 mgr.y (mgr.14409) 186 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:23.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:22 vm02 bash[17462]: cluster 2026-03-10T05:50:21.721934+0000 mgr.y (mgr.14409) 187 : cluster [DBG] pgmap v140: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:23.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:22 vm02 bash[22526]: cluster 2026-03-10T05:50:21.721934+0000 mgr.y (mgr.14409) 187 : cluster [DBG] pgmap v140: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:23.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:22 vm05 bash[17864]: cluster 2026-03-10T05:50:21.721934+0000 mgr.y (mgr.14409) 187 : cluster [DBG] pgmap v140: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:23.253 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:50:22 vm05 bash[18520]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:50:22] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T05:50:23.787 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:50:23 vm02 bash[43400]: level=error ts=2026-03-10T05:50:23.516Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T05:50:23.787 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:50:23 vm02 bash[43400]: level=warn ts=2026-03-10T05:50:23.518Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot 
validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T05:50:23.787 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:50:23 vm02 bash[43400]: level=warn ts=2026-03-10T05:50:23.519Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs" 2026-03-10T05:50:24.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:23 vm02 bash[17462]: audit 2026-03-10T05:50:22.743844+0000 mgr.y (mgr.14409) 188 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:50:24.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:23 vm02 bash[22526]: audit 2026-03-10T05:50:22.743844+0000 mgr.y (mgr.14409) 188 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:50:24.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:23 vm05 bash[17864]: audit 2026-03-10T05:50:22.743844+0000 mgr.y (mgr.14409) 188 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:50:25.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:24 vm02 bash[17462]: cluster 2026-03-10T05:50:23.722508+0000 mgr.y (mgr.14409) 189 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:25.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:24 vm02 bash[22526]: cluster 2026-03-10T05:50:23.722508+0000 mgr.y (mgr.14409) 189 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:25.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:24 vm05 bash[17864]: cluster 2026-03-10T05:50:23.722508+0000 mgr.y (mgr.14409) 189 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:27.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:26 vm02 bash[17462]: cluster 2026-03-10T05:50:25.722830+0000 mgr.y (mgr.14409) 190 : cluster [DBG] pgmap v142: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:27.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:26 vm02 bash[22526]: cluster 2026-03-10T05:50:25.722830+0000 mgr.y (mgr.14409) 190 : cluster [DBG] pgmap v142: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:27.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:26 vm05 bash[17864]: cluster 2026-03-10T05:50:25.722830+0000 mgr.y (mgr.14409) 190 : cluster [DBG] pgmap v142: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:27.471 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:50:27 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:50:27] "GET /metrics HTTP/1.1" 200 214465 "" "Prometheus/2.33.4" 2026-03-10T05:50:29.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:28 vm02 bash[17462]: cluster 2026-03-10T05:50:27.723267+0000 mgr.y (mgr.14409) 191 : cluster [DBG] pgmap v143: 161 pgs: 161 
active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:29.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:28 vm02 bash[22526]: cluster 2026-03-10T05:50:27.723267+0000 mgr.y (mgr.14409) 191 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:29.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:28 vm05 bash[17864]: cluster 2026-03-10T05:50:27.723267+0000 mgr.y (mgr.14409) 191 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:31.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:30 vm02 bash[17462]: cluster 2026-03-10T05:50:29.723644+0000 mgr.y (mgr.14409) 192 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:31.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:30 vm02 bash[22526]: cluster 2026-03-10T05:50:29.723644+0000 mgr.y (mgr.14409) 192 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:31.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:30 vm05 bash[17864]: cluster 2026-03-10T05:50:29.723644+0000 mgr.y (mgr.14409) 192 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:33.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:32 vm02 bash[17462]: cluster 2026-03-10T05:50:31.723918+0000 mgr.y (mgr.14409) 193 : cluster [DBG] pgmap v145: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:33.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:32 vm02 bash[22526]: cluster 2026-03-10T05:50:31.723918+0000 mgr.y (mgr.14409) 193 : cluster [DBG] pgmap v145: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:33.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:32 vm05 bash[17864]: cluster 2026-03-10T05:50:31.723918+0000 mgr.y (mgr.14409) 193 : cluster [DBG] pgmap v145: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:33.253 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:50:32 vm05 bash[18520]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:50:32] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T05:50:33.820 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:50:33 vm02 bash[43400]: level=error ts=2026-03-10T05:50:33.517Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T05:50:33.820 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:50:33 vm02 bash[43400]: level=warn ts=2026-03-10T05:50:33.519Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" 
attempts=1 err="Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs" 2026-03-10T05:50:33.820 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:50:33 vm02 bash[43400]: level=warn ts=2026-03-10T05:50:33.519Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T05:50:34.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:33 vm02 bash[17462]: audit 2026-03-10T05:50:32.748301+0000 mgr.y (mgr.14409) 194 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:50:34.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:33 vm02 bash[22526]: audit 2026-03-10T05:50:32.748301+0000 mgr.y (mgr.14409) 194 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:50:34.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:33 vm05 bash[17864]: audit 2026-03-10T05:50:32.748301+0000 mgr.y (mgr.14409) 194 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:50:35.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:34 vm02 bash[17462]: cluster 2026-03-10T05:50:33.724438+0000 mgr.y (mgr.14409) 195 : cluster [DBG] pgmap v146: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:35.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:34 vm02 bash[22526]: cluster 2026-03-10T05:50:33.724438+0000 mgr.y (mgr.14409) 195 : cluster [DBG] pgmap v146: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:35.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:34 vm05 bash[17864]: cluster 2026-03-10T05:50:33.724438+0000 mgr.y (mgr.14409) 195 : cluster [DBG] pgmap v146: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:37.143 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:36 vm02 bash[17462]: cluster 2026-03-10T05:50:35.724798+0000 mgr.y (mgr.14409) 196 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:37.143 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:36 vm02 bash[22526]: cluster 2026-03-10T05:50:35.724798+0000 mgr.y (mgr.14409) 196 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:37.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:36 vm05 bash[17864]: cluster 2026-03-10T05:50:35.724798+0000 mgr.y (mgr.14409) 196 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:37.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:50:37 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:50:37] "GET /metrics HTTP/1.1" 200 214459 "" "Prometheus/2.33.4" 2026-03-10T05:50:39.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:38 vm05 bash[17864]: cluster 
2026-03-10T05:50:37.725191+0000 mgr.y (mgr.14409) 197 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:50:39.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:38 vm02 bash[17462]: cluster 2026-03-10T05:50:37.725191+0000 mgr.y (mgr.14409) 197 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:50:39.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:38 vm02 bash[22526]: cluster 2026-03-10T05:50:37.725191+0000 mgr.y (mgr.14409) 197 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:50:41.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:40 vm05 bash[17864]: cluster 2026-03-10T05:50:39.725623+0000 mgr.y (mgr.14409) 198 : cluster [DBG] pgmap v149: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:50:41.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:40 vm02 bash[17462]: cluster 2026-03-10T05:50:39.725623+0000 mgr.y (mgr.14409) 198 : cluster [DBG] pgmap v149: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:50:41.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:40 vm02 bash[22526]: cluster 2026-03-10T05:50:39.725623+0000 mgr.y (mgr.14409) 198 : cluster [DBG] pgmap v149: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:50:41.801 INFO:teuthology.orchestra.run.vm02.stdout:true
2026-03-10T05:50:42.148 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T05:50:42.148 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.a vm02 *:9093,9094 running (3m) 109s ago 3m 16.4M - ba2b418f427c 3305780e5ef5
2026-03-10T05:50:42.148 INFO:teuthology.orchestra.run.vm02.stdout:grafana.a vm05 *:3000 running (3m) 109s ago 3m 42.4M - 8.3.5 dad864ee21e9 a370f3725ef2
2026-03-10T05:50:42.148 INFO:teuthology.orchestra.run.vm02.stdout:iscsi.foo.vm02.mxbwmh vm02 running (2m) 109s ago 3m 41.3M - 3.5 e1d6a67b021e c01d22afac06
2026-03-10T05:50:42.149 INFO:teuthology.orchestra.run.vm02.stdout:mgr.x vm05 *:8443 running (6m) 109s ago 6m 398M - 17.2.0 e1d6a67b021e b2f4d40768f0
2026-03-10T05:50:42.149 INFO:teuthology.orchestra.run.vm02.stdout:mgr.y vm02 *:9283 running (6m) 109s ago 6m 445M - 17.2.0 e1d6a67b021e a04e3f113661
2026-03-10T05:50:42.149 INFO:teuthology.orchestra.run.vm02.stdout:mon.a vm02 running (6m) 109s ago 6m 49.4M 2048M 17.2.0 e1d6a67b021e bf59d12a7baa
2026-03-10T05:50:42.149 INFO:teuthology.orchestra.run.vm02.stdout:mon.b vm05 running (6m) 109s ago 6m 45.9M 2048M 17.2.0 e1d6a67b021e 96a2a71fd403
2026-03-10T05:50:42.149 INFO:teuthology.orchestra.run.vm02.stdout:mon.c vm02 running (6m) 109s ago 6m 47.6M 2048M 17.2.0 e1d6a67b021e 2f6dcf491c61
2026-03-10T05:50:42.149 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.a vm02 *:9100 running (3m) 109s ago 3m 8040k - 1dbe0e931976 111574d033cc
2026-03-10T05:50:42.149 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.b vm05 *:9100 running (3m) 109s ago 3m 9784k - 1dbe0e931976 b6278e64d85c
2026-03-10T05:50:42.149 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (5m) 109s ago 5m 47.9M 4096M 17.2.0 e1d6a67b021e 563d55a3e6a4
2026-03-10T05:50:42.149 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (5m) 109s ago 5m 50.8M 4096M 17.2.0 e1d6a67b021e 8c25a1e89677
2026-03-10T05:50:42.149 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (5m) 109s ago 5m 46.2M 4096M 17.2.0 e1d6a67b021e 826f54bdbc5c
2026-03-10T05:50:42.149 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm02 running (5m) 109s ago 5m 49.1M 4096M 17.2.0 e1d6a67b021e 0c6cfa53c9fd
2026-03-10T05:50:42.149 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (4m) 109s ago 4m 49.4M 4096M 17.2.0 e1d6a67b021e 4ffe1741f201
2026-03-10T05:50:42.149 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (4m) 109s ago 4m 47.8M 4096M 17.2.0 e1d6a67b021e cba5583c238e
2026-03-10T05:50:42.149 INFO:teuthology.orchestra.run.vm02.stdout:osd.6 vm05 running (4m) 109s ago 4m 45.7M 4096M 17.2.0 e1d6a67b021e 9d1b370357d7
2026-03-10T05:50:42.149 INFO:teuthology.orchestra.run.vm02.stdout:osd.7 vm05 running (4m) 109s ago 4m 47.4M 4096M 17.2.0 e1d6a67b021e 8a4837b788cf
2026-03-10T05:50:42.149 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.a vm05 *:9095 running (3m) 109s ago 3m 45.8M - 514e6a882f6e 6c053703db40
2026-03-10T05:50:42.149 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm02.pbogjd vm02 *:8000 running (3m) 109s ago 3m 82.9M - 17.2.0 e1d6a67b021e 2ab2ffd1abaa
2026-03-10T05:50:42.149 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm05.hvmsxl vm05 *:8000 running (3m) 109s ago 3m 82.9M - 17.2.0 e1d6a67b021e 85d1c77b7e9d
2026-03-10T05:50:42.149 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm02.pglcfm vm02 *:80 running (3m) 109s ago 3m 82.7M - 17.2.0 e1d6a67b021e ef152a460673
2026-03-10T05:50:42.149 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm05.hqqmap vm05 *:80 running (3m) 109s ago 3m 82.5M - 17.2.0 e1d6a67b021e 29c9ee794f34
2026-03-10T05:50:42.354 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:50:42.354 INFO:teuthology.orchestra.run.vm02.stdout: "mon": {
2026-03-10T05:50:42.354 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3
2026-03-10T05:50:42.355 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:50:42.355 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": {
2026-03-10T05:50:42.355 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-10T05:50:42.355 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:50:42.355 INFO:teuthology.orchestra.run.vm02.stdout: "osd": {
2026-03-10T05:50:42.355 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8
2026-03-10T05:50:42.355 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:50:42.355 INFO:teuthology.orchestra.run.vm02.stdout: "mds": {},
2026-03-10T05:50:42.355 INFO:teuthology.orchestra.run.vm02.stdout: "rgw": {
2026-03-10T05:50:42.355 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4
2026-03-10T05:50:42.355 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:50:42.355 INFO:teuthology.orchestra.run.vm02.stdout: "overall": {
2026-03-10T05:50:42.355 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 17
2026-03-10T05:50:42.355 INFO:teuthology.orchestra.run.vm02.stdout: }
2026-03-10T05:50:42.355 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:50:42.528 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:50:42.528 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
2026-03-10T05:50:42.528 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true,
2026-03-10T05:50:42.528 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [],
2026-03-10T05:50:42.528 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "0/23 daemons upgraded",
2026-03-10T05:50:42.528 INFO:teuthology.orchestra.run.vm02.stdout: "message": "Pulling quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df image on host vm05"
2026-03-10T05:50:42.528 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:50:42.738 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_OK
2026-03-10T05:50:43.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:42 vm02 bash[17462]: cluster 2026-03-10T05:50:41.725944+0000 mgr.y (mgr.14409) 199 : cluster [DBG] pgmap v150: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:50:43.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:42 vm02 bash[17462]: audit 2026-03-10T05:50:41.789076+0000 mgr.y (mgr.14409) 200 : audit [DBG] from='client.24838 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:50:43.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:42 vm02 bash[17462]: audit 2026-03-10T05:50:42.353776+0000 mon.c (mon.1) 110 : audit [DBG] from='client.? 192.168.123.102:0/1121031476' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:50:43.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:42 vm02 bash[17462]: audit 2026-03-10T05:50:42.737867+0000 mon.a (mon.0) 771 : audit [DBG] from='client.? 192.168.123.102:0/2328279473' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T05:50:43.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:42 vm02 bash[22526]: cluster 2026-03-10T05:50:41.725944+0000 mgr.y (mgr.14409) 199 : cluster [DBG] pgmap v150: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:50:43.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:42 vm02 bash[22526]: audit 2026-03-10T05:50:41.789076+0000 mgr.y (mgr.14409) 200 : audit [DBG] from='client.24838 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:50:43.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:42 vm02 bash[22526]: audit 2026-03-10T05:50:42.353776+0000 mon.c (mon.1) 110 : audit [DBG] from='client.? 192.168.123.102:0/1121031476' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:50:43.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:42 vm02 bash[22526]: audit 2026-03-10T05:50:42.737867+0000 mon.a (mon.0) 771 : audit [DBG] from='client.? 
192.168.123.102:0/2328279473' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:50:43.252 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:50:42 vm05 bash[18520]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:50:42] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T05:50:43.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:42 vm05 bash[17864]: cluster 2026-03-10T05:50:41.725944+0000 mgr.y (mgr.14409) 199 : cluster [DBG] pgmap v150: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:43.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:42 vm05 bash[17864]: audit 2026-03-10T05:50:41.789076+0000 mgr.y (mgr.14409) 200 : audit [DBG] from='client.24838 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:50:43.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:42 vm05 bash[17864]: audit 2026-03-10T05:50:42.353776+0000 mon.c (mon.1) 110 : audit [DBG] from='client.? 192.168.123.102:0/1121031476' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:50:43.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:42 vm05 bash[17864]: audit 2026-03-10T05:50:42.737867+0000 mon.a (mon.0) 771 : audit [DBG] from='client.? 192.168.123.102:0/2328279473' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:50:43.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:50:43 vm02 bash[43400]: level=error ts=2026-03-10T05:50:43.518Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T05:50:43.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:50:43 vm02 bash[43400]: level=warn ts=2026-03-10T05:50:43.520Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T05:50:43.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:50:43 vm02 bash[43400]: level=warn ts=2026-03-10T05:50:43.520Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs" 2026-03-10T05:50:44.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:43 vm05 bash[17864]: audit 2026-03-10T05:50:41.976056+0000 mgr.y (mgr.14409) 201 : audit [DBG] from='client.14898 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:50:44.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:43 vm05 bash[17864]: audit 2026-03-10T05:50:42.143329+0000 mgr.y (mgr.14409) 202 : audit [DBG] from='client.14904 -' entity='client.admin' 
cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:50:44.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:43 vm05 bash[17864]: audit 2026-03-10T05:50:42.527373+0000 mgr.y (mgr.14409) 203 : audit [DBG] from='client.24856 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:50:44.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:43 vm05 bash[17864]: audit 2026-03-10T05:50:42.758304+0000 mgr.y (mgr.14409) 204 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:50:44.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:43 vm02 bash[17462]: audit 2026-03-10T05:50:41.976056+0000 mgr.y (mgr.14409) 201 : audit [DBG] from='client.14898 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:50:44.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:43 vm02 bash[17462]: audit 2026-03-10T05:50:42.143329+0000 mgr.y (mgr.14409) 202 : audit [DBG] from='client.14904 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:50:44.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:43 vm02 bash[17462]: audit 2026-03-10T05:50:42.527373+0000 mgr.y (mgr.14409) 203 : audit [DBG] from='client.24856 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:50:44.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:43 vm02 bash[17462]: audit 2026-03-10T05:50:42.758304+0000 mgr.y (mgr.14409) 204 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:50:44.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:43 vm02 bash[22526]: audit 2026-03-10T05:50:41.976056+0000 mgr.y (mgr.14409) 201 : audit [DBG] from='client.14898 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:50:44.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:43 vm02 bash[22526]: audit 2026-03-10T05:50:42.143329+0000 mgr.y (mgr.14409) 202 : audit [DBG] from='client.14904 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:50:44.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:43 vm02 bash[22526]: audit 2026-03-10T05:50:42.527373+0000 mgr.y (mgr.14409) 203 : audit [DBG] from='client.24856 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:50:44.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:43 vm02 bash[22526]: audit 2026-03-10T05:50:42.758304+0000 mgr.y (mgr.14409) 204 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:50:45.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:44 vm05 bash[17864]: cluster 2026-03-10T05:50:43.726569+0000 mgr.y (mgr.14409) 205 : cluster [DBG] pgmap v151: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:45.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:44 vm02 bash[17462]: cluster 2026-03-10T05:50:43.726569+0000 mgr.y (mgr.14409) 205 : cluster [DBG] pgmap v151: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T05:50:45.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:44 vm02 bash[22526]: cluster 2026-03-10T05:50:43.726569+0000 mgr.y (mgr.14409) 205 : cluster [DBG] pgmap v151: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:47.143 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:46 vm02 bash[17462]: cluster 2026-03-10T05:50:45.726844+0000 mgr.y (mgr.14409) 206 : cluster [DBG] pgmap v152: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:47.143 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:46 vm02 bash[22526]: cluster 2026-03-10T05:50:45.726844+0000 mgr.y (mgr.14409) 206 : cluster [DBG] pgmap v152: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:47.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:46 vm05 bash[17864]: cluster 2026-03-10T05:50:45.726844+0000 mgr.y (mgr.14409) 206 : cluster [DBG] pgmap v152: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:47.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:50:47 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:50:47] "GET /metrics HTTP/1.1" 200 214459 "" "Prometheus/2.33.4" 2026-03-10T05:50:49.502 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:49 vm05 bash[17864]: cluster 2026-03-10T05:50:47.727346+0000 mgr.y (mgr.14409) 207 : cluster [DBG] pgmap v153: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:49.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:49 vm02 bash[17462]: cluster 2026-03-10T05:50:47.727346+0000 mgr.y (mgr.14409) 207 : cluster [DBG] pgmap v153: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:49.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:49 vm02 bash[22526]: cluster 2026-03-10T05:50:47.727346+0000 mgr.y (mgr.14409) 207 : cluster [DBG] pgmap v153: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:50.502 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:50 vm05 bash[17864]: cluster 2026-03-10T05:50:49.727681+0000 mgr.y (mgr.14409) 208 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:50.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:50 vm02 bash[17462]: cluster 2026-03-10T05:50:49.727681+0000 mgr.y (mgr.14409) 208 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:50.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:50 vm02 bash[22526]: cluster 2026-03-10T05:50:49.727681+0000 mgr.y (mgr.14409) 208 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:52.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:51 vm05 bash[17864]: audit 2026-03-10T05:50:51.775869+0000 mon.c (mon.1) 111 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:50:52.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:51 vm05 bash[17864]: audit 2026-03-10T05:50:51.776263+0000 mon.a (mon.0) 
772 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:50:52.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:51 vm05 bash[17864]: audit 2026-03-10T05:50:51.778134+0000 mon.c (mon.1) 112 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:50:52.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:51 vm05 bash[17864]: audit 2026-03-10T05:50:51.778357+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:50:52.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:51 vm02 bash[17462]: audit 2026-03-10T05:50:51.775869+0000 mon.c (mon.1) 111 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:50:52.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:51 vm02 bash[17462]: audit 2026-03-10T05:50:51.776263+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:50:52.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:51 vm02 bash[17462]: audit 2026-03-10T05:50:51.778134+0000 mon.c (mon.1) 112 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:50:52.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:51 vm02 bash[17462]: audit 2026-03-10T05:50:51.778357+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:50:52.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:51 vm02 bash[22526]: audit 2026-03-10T05:50:51.775869+0000 mon.c (mon.1) 111 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:50:52.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:51 vm02 bash[22526]: audit 2026-03-10T05:50:51.776263+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:50:52.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:51 vm02 bash[22526]: audit 2026-03-10T05:50:51.778134+0000 mon.c (mon.1) 112 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:50:52.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:51 vm02 bash[22526]: audit 2026-03-10T05:50:51.778357+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:50:53.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:52 vm02 bash[17462]: cluster 2026-03-10T05:50:51.727948+0000 mgr.y (mgr.14409) 209 : cluster [DBG] pgmap v155: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 
160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:53.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:52 vm02 bash[22526]: cluster 2026-03-10T05:50:51.727948+0000 mgr.y (mgr.14409) 209 : cluster [DBG] pgmap v155: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:53.252 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:50:52 vm05 bash[18520]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:50:52] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T05:50:53.253 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:52 vm05 bash[17864]: cluster 2026-03-10T05:50:51.727948+0000 mgr.y (mgr.14409) 209 : cluster [DBG] pgmap v155: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:53.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:50:53 vm02 bash[43400]: level=error ts=2026-03-10T05:50:53.519Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T05:50:53.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:50:53 vm02 bash[43400]: level=warn ts=2026-03-10T05:50:53.521Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T05:50:53.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:50:53 vm02 bash[43400]: level=warn ts=2026-03-10T05:50:53.521Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs" 2026-03-10T05:50:54.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:53 vm05 bash[17864]: audit 2026-03-10T05:50:52.768250+0000 mgr.y (mgr.14409) 210 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:50:54.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:53 vm02 bash[17462]: audit 2026-03-10T05:50:52.768250+0000 mgr.y (mgr.14409) 210 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:50:54.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:53 vm02 bash[22526]: audit 2026-03-10T05:50:52.768250+0000 mgr.y (mgr.14409) 210 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:50:55.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:54 vm05 bash[17864]: cluster 2026-03-10T05:50:53.728567+0000 mgr.y (mgr.14409) 211 : cluster [DBG] pgmap v156: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 
KiB/s rd, 1 op/s 2026-03-10T05:50:55.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:54 vm02 bash[17462]: cluster 2026-03-10T05:50:53.728567+0000 mgr.y (mgr.14409) 211 : cluster [DBG] pgmap v156: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:55.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:54 vm02 bash[22526]: cluster 2026-03-10T05:50:53.728567+0000 mgr.y (mgr.14409) 211 : cluster [DBG] pgmap v156: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:57.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:57 vm02 bash[17462]: cluster 2026-03-10T05:50:55.728865+0000 mgr.y (mgr.14409) 212 : cluster [DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:57.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:57 vm02 bash[22526]: cluster 2026-03-10T05:50:55.728865+0000 mgr.y (mgr.14409) 212 : cluster [DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:57.334 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:50:57 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:50:57] "GET /metrics HTTP/1.1" 200 214466 "" "Prometheus/2.33.4" 2026-03-10T05:50:57.502 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:57 vm05 bash[17864]: cluster 2026-03-10T05:50:55.728865+0000 mgr.y (mgr.14409) 212 : cluster [DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:50:58.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:50:58 vm02 bash[17462]: cluster 2026-03-10T05:50:57.729245+0000 mgr.y (mgr.14409) 213 : cluster [DBG] pgmap v158: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:58.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:50:58 vm02 bash[22526]: cluster 2026-03-10T05:50:57.729245+0000 mgr.y (mgr.14409) 213 : cluster [DBG] pgmap v158: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:50:58.502 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:50:58 vm05 bash[17864]: cluster 2026-03-10T05:50:57.729245+0000 mgr.y (mgr.14409) 213 : cluster [DBG] pgmap v158: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:51:01.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:00 vm02 bash[17462]: cluster 2026-03-10T05:50:59.729501+0000 mgr.y (mgr.14409) 214 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:51:01.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:00 vm02 bash[22526]: cluster 2026-03-10T05:50:59.729501+0000 mgr.y (mgr.14409) 214 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:51:01.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:00 vm05 bash[17864]: cluster 2026-03-10T05:50:59.729501+0000 mgr.y (mgr.14409) 214 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:51:03.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:02 vm02 bash[17462]: cluster 2026-03-10T05:51:01.729759+0000 mgr.y (mgr.14409) 215 : cluster 
[DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:51:03.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:02 vm02 bash[22526]: cluster 2026-03-10T05:51:01.729759+0000 mgr.y (mgr.14409) 215 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:51:03.252 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:02 vm05 bash[18520]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:51:02] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T05:51:03.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:02 vm05 bash[17864]: cluster 2026-03-10T05:51:01.729759+0000 mgr.y (mgr.14409) 215 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:51:03.799 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:03 vm02 bash[43400]: level=error ts=2026-03-10T05:51:03.520Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T05:51:03.799 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:03 vm02 bash[43400]: level=warn ts=2026-03-10T05:51:03.521Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T05:51:03.799 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:03 vm02 bash[43400]: level=warn ts=2026-03-10T05:51:03.522Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs" 2026-03-10T05:51:04.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:03 vm02 bash[17462]: audit 2026-03-10T05:51:02.778395+0000 mgr.y (mgr.14409) 216 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:51:04.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:03 vm02 bash[22526]: audit 2026-03-10T05:51:02.778395+0000 mgr.y (mgr.14409) 216 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:51:04.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:03 vm05 bash[17864]: audit 2026-03-10T05:51:02.778395+0000 mgr.y (mgr.14409) 216 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:51:05.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:04 vm02 bash[17462]: cluster 2026-03-10T05:51:03.730276+0000 mgr.y (mgr.14409) 217 : cluster [DBG] pgmap v161: 
161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:51:05.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:04 vm02 bash[22526]: cluster 2026-03-10T05:51:03.730276+0000 mgr.y (mgr.14409) 217 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:51:05.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:04 vm05 bash[17864]: cluster 2026-03-10T05:51:03.730276+0000 mgr.y (mgr.14409) 217 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:51:07.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:06 vm02 bash[17462]: cluster 2026-03-10T05:51:05.730662+0000 mgr.y (mgr.14409) 218 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:51:07.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:06 vm02 bash[22526]: cluster 2026-03-10T05:51:05.730662+0000 mgr.y (mgr.14409) 218 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:51:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:06 vm05 bash[17864]: cluster 2026-03-10T05:51:05.730662+0000 mgr.y (mgr.14409) 218 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:51:07.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:07 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:51:07] "GET /metrics HTTP/1.1" 200 214461 "" "Prometheus/2.33.4" 2026-03-10T05:51:09.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:08 vm02 bash[17462]: cluster 2026-03-10T05:51:07.731192+0000 mgr.y (mgr.14409) 219 : cluster [DBG] pgmap v163: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:51:09.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:08 vm02 bash[22526]: cluster 2026-03-10T05:51:07.731192+0000 mgr.y (mgr.14409) 219 : cluster [DBG] pgmap v163: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:51:09.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:08 vm05 bash[17864]: cluster 2026-03-10T05:51:07.731192+0000 mgr.y (mgr.14409) 219 : cluster [DBG] pgmap v163: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:51:11.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:10 vm05 bash[17864]: cluster 2026-03-10T05:51:09.731482+0000 mgr.y (mgr.14409) 220 : cluster [DBG] pgmap v164: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:51:11.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:10 vm02 bash[17462]: cluster 2026-03-10T05:51:09.731482+0000 mgr.y (mgr.14409) 220 : cluster [DBG] pgmap v164: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:51:11.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:10 vm02 bash[22526]: cluster 2026-03-10T05:51:09.731482+0000 mgr.y (mgr.14409) 220 : cluster [DBG] pgmap v164: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:51:12.939 INFO:teuthology.orchestra.run.vm02.stdout:true 
2026-03-10T05:51:13.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:12 vm02 bash[17462]: cluster 2026-03-10T05:51:11.731730+0000 mgr.y (mgr.14409) 221 : cluster [DBG] pgmap v165: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:51:13.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:12 vm02 bash[22526]: cluster 2026-03-10T05:51:11.731730+0000 mgr.y (mgr.14409) 221 : cluster [DBG] pgmap v165: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:51:13.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:12 vm05 bash[17864]: cluster 2026-03-10T05:51:11.731730+0000 mgr.y (mgr.14409) 221 : cluster [DBG] pgmap v165: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:51:13.252 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:12 vm05 bash[18520]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:51:12] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T05:51:13.290 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T05:51:13.291 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.a vm02 *:9093,9094 running (3m) 2m ago 4m 16.4M - ba2b418f427c 3305780e5ef5
2026-03-10T05:51:13.291 INFO:teuthology.orchestra.run.vm02.stdout:grafana.a vm05 *:3000 running (3m) 2m ago 3m 42.4M - 8.3.5 dad864ee21e9 a370f3725ef2
2026-03-10T05:51:13.291 INFO:teuthology.orchestra.run.vm02.stdout:iscsi.foo.vm02.mxbwmh vm02 running (3m) 2m ago 3m 41.3M - 3.5 e1d6a67b021e c01d22afac06
2026-03-10T05:51:13.291 INFO:teuthology.orchestra.run.vm02.stdout:mgr.x vm05 *:8443 running (6m) 2m ago 6m 398M - 17.2.0 e1d6a67b021e b2f4d40768f0
2026-03-10T05:51:13.291 INFO:teuthology.orchestra.run.vm02.stdout:mgr.y vm02 *:9283 running (7m) 2m ago 7m 445M - 17.2.0 e1d6a67b021e a04e3f113661
2026-03-10T05:51:13.291 INFO:teuthology.orchestra.run.vm02.stdout:mon.a vm02 running (7m) 2m ago 7m 49.4M 2048M 17.2.0 e1d6a67b021e bf59d12a7baa
2026-03-10T05:51:13.291 INFO:teuthology.orchestra.run.vm02.stdout:mon.b vm05 running (6m) 2m ago 6m 45.9M 2048M 17.2.0 e1d6a67b021e 96a2a71fd403
2026-03-10T05:51:13.291 INFO:teuthology.orchestra.run.vm02.stdout:mon.c vm02 running (6m) 2m ago 6m 47.6M 2048M 17.2.0 e1d6a67b021e 2f6dcf491c61
2026-03-10T05:51:13.291 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.a vm02 *:9100 running (4m) 2m ago 4m 8040k - 1dbe0e931976 111574d033cc
2026-03-10T05:51:13.291 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.b vm05 *:9100 running (4m) 2m ago 4m 9784k - 1dbe0e931976 b6278e64d85c
2026-03-10T05:51:13.291 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (6m) 2m ago 6m 47.9M 4096M 17.2.0 e1d6a67b021e 563d55a3e6a4
2026-03-10T05:51:13.291 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (6m) 2m ago 6m 50.8M 4096M 17.2.0 e1d6a67b021e 8c25a1e89677
2026-03-10T05:51:13.291 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (5m) 2m ago 5m 46.2M 4096M 17.2.0 e1d6a67b021e 826f54bdbc5c
2026-03-10T05:51:13.291 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm02 running (5m) 2m ago 5m 49.1M 4096M 17.2.0 e1d6a67b021e 0c6cfa53c9fd
2026-03-10T05:51:13.291 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (5m) 2m ago 5m 49.4M 4096M 17.2.0 e1d6a67b021e 4ffe1741f201
2026-03-10T05:51:13.291 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (5m) 2m ago 5m 47.8M 4096M 17.2.0 e1d6a67b021e cba5583c238e
2026-03-10T05:51:13.291 INFO:teuthology.orchestra.run.vm02.stdout:osd.6 vm05 running (4m) 2m ago 4m 45.7M 4096M 17.2.0 e1d6a67b021e 9d1b370357d7
2026-03-10T05:51:13.291 INFO:teuthology.orchestra.run.vm02.stdout:osd.7 vm05 running (4m) 2m ago 4m 47.4M 4096M 17.2.0 e1d6a67b021e 8a4837b788cf
2026-03-10T05:51:13.291 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.a vm05 *:9095 running (3m) 2m ago 4m 45.8M - 514e6a882f6e 6c053703db40
2026-03-10T05:51:13.291 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm02.pbogjd vm02 *:8000 running (3m) 2m ago 3m 82.9M - 17.2.0 e1d6a67b021e 2ab2ffd1abaa
2026-03-10T05:51:13.291 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm05.hvmsxl vm05 *:8000 running (3m) 2m ago 3m 82.9M - 17.2.0 e1d6a67b021e 85d1c77b7e9d
2026-03-10T05:51:13.291 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm02.pglcfm vm02 *:80 running (3m) 2m ago 3m 82.7M - 17.2.0 e1d6a67b021e ef152a460673
2026-03-10T05:51:13.291 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm05.hqqmap vm05 *:80 running (3m) 2m ago 3m 82.5M - 17.2.0 e1d6a67b021e 29c9ee794f34
2026-03-10T05:51:13.492 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:51:13.492 INFO:teuthology.orchestra.run.vm02.stdout: "mon": {
2026-03-10T05:51:13.492 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3
2026-03-10T05:51:13.492 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:51:13.492 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": {
2026-03-10T05:51:13.492 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-10T05:51:13.492 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:51:13.492 INFO:teuthology.orchestra.run.vm02.stdout: "osd": {
2026-03-10T05:51:13.492 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8
2026-03-10T05:51:13.492 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:51:13.492 INFO:teuthology.orchestra.run.vm02.stdout: "mds": {},
2026-03-10T05:51:13.492 INFO:teuthology.orchestra.run.vm02.stdout: "rgw": {
2026-03-10T05:51:13.492 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4
2026-03-10T05:51:13.492 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:51:13.492 INFO:teuthology.orchestra.run.vm02.stdout: "overall": {
2026-03-10T05:51:13.492 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 17
2026-03-10T05:51:13.492 INFO:teuthology.orchestra.run.vm02.stdout: }
2026-03-10T05:51:13.492 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:51:13.671 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:51:13.671 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
2026-03-10T05:51:13.671 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true,
2026-03-10T05:51:13.672 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [],
2026-03-10T05:51:13.672 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "0/23 daemons upgraded",
2026-03-10T05:51:13.672 INFO:teuthology.orchestra.run.vm02.stdout: "message": "Pulling quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df image on host vm05"
2026-03-10T05:51:13.672 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:51:13.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:13 vm02 bash[43400]: level=error ts=2026-03-10T05:51:13.520Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T05:51:13.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:13 vm02 bash[43400]: level=warn ts=2026-03-10T05:51:13.523Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T05:51:13.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:13 vm02 bash[43400]: level=warn ts=2026-03-10T05:51:13.526Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs" 2026-03-10T05:51:13.882 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_OK 2026-03-10T05:51:14.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:13 vm05 bash[17864]: audit 2026-03-10T05:51:12.788132+0000 mgr.y (mgr.14409) 222 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:51:14.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:13 vm05 bash[17864]: audit 2026-03-10T05:51:13.491462+0000 mon.a (mon.0) 774 : audit [DBG] from='client.? 192.168.123.102:0/4099192968' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:51:14.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:13 vm02 bash[17462]: audit 2026-03-10T05:51:12.788132+0000 mgr.y (mgr.14409) 222 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:51:14.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:13 vm02 bash[17462]: audit 2026-03-10T05:51:13.491462+0000 mon.a (mon.0) 774 : audit [DBG] from='client.? 192.168.123.102:0/4099192968' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:51:14.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:13 vm02 bash[22526]: audit 2026-03-10T05:51:12.788132+0000 mgr.y (mgr.14409) 222 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:51:14.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:13 vm02 bash[22526]: audit 2026-03-10T05:51:13.491462+0000 mon.a (mon.0) 774 : audit [DBG] from='client.? 
192.168.123.102:0/4099192968' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:51:15.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:14 vm05 bash[17864]: audit 2026-03-10T05:51:12.928128+0000 mgr.y (mgr.14409) 223 : audit [DBG] from='client.14922 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:51:15.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:14 vm05 bash[17864]: audit 2026-03-10T05:51:13.107548+0000 mgr.y (mgr.14409) 224 : audit [DBG] from='client.14928 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:51:15.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:14 vm05 bash[17864]: audit 2026-03-10T05:51:13.285543+0000 mgr.y (mgr.14409) 225 : audit [DBG] from='client.14934 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:51:15.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:14 vm05 bash[17864]: audit 2026-03-10T05:51:13.671192+0000 mgr.y (mgr.14409) 226 : audit [DBG] from='client.24877 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:51:15.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:14 vm05 bash[17864]: cluster 2026-03-10T05:51:13.732212+0000 mgr.y (mgr.14409) 227 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:51:15.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:14 vm05 bash[17864]: audit 2026-03-10T05:51:13.881598+0000 mon.c (mon.1) 113 : audit [DBG] from='client.? 192.168.123.102:0/1272522019' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:51:15.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:14 vm02 bash[17462]: audit 2026-03-10T05:51:12.928128+0000 mgr.y (mgr.14409) 223 : audit [DBG] from='client.14922 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:51:15.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:14 vm02 bash[17462]: audit 2026-03-10T05:51:13.107548+0000 mgr.y (mgr.14409) 224 : audit [DBG] from='client.14928 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:51:15.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:14 vm02 bash[17462]: audit 2026-03-10T05:51:13.285543+0000 mgr.y (mgr.14409) 225 : audit [DBG] from='client.14934 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:51:15.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:14 vm02 bash[17462]: audit 2026-03-10T05:51:13.671192+0000 mgr.y (mgr.14409) 226 : audit [DBG] from='client.24877 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:51:15.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:14 vm02 bash[17462]: cluster 2026-03-10T05:51:13.732212+0000 mgr.y (mgr.14409) 227 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:51:15.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:14 vm02 bash[17462]: audit 2026-03-10T05:51:13.881598+0000 mon.c (mon.1) 113 : audit [DBG] from='client.? 
192.168.123.102:0/1272522019' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:51:15.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:14 vm02 bash[22526]: audit 2026-03-10T05:51:12.928128+0000 mgr.y (mgr.14409) 223 : audit [DBG] from='client.14922 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:51:15.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:14 vm02 bash[22526]: audit 2026-03-10T05:51:13.107548+0000 mgr.y (mgr.14409) 224 : audit [DBG] from='client.14928 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:51:15.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:14 vm02 bash[22526]: audit 2026-03-10T05:51:13.285543+0000 mgr.y (mgr.14409) 225 : audit [DBG] from='client.14934 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:51:15.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:14 vm02 bash[22526]: audit 2026-03-10T05:51:13.671192+0000 mgr.y (mgr.14409) 226 : audit [DBG] from='client.24877 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:51:15.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:14 vm02 bash[22526]: cluster 2026-03-10T05:51:13.732212+0000 mgr.y (mgr.14409) 227 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:51:15.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:14 vm02 bash[22526]: audit 2026-03-10T05:51:13.881598+0000 mon.c (mon.1) 113 : audit [DBG] from='client.? 192.168.123.102:0/1272522019' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:51:16.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:15 vm05 bash[17864]: cluster 2026-03-10T05:51:15.732481+0000 mgr.y (mgr.14409) 228 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:51:16.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:15 vm02 bash[17462]: cluster 2026-03-10T05:51:15.732481+0000 mgr.y (mgr.14409) 228 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:51:16.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:15 vm02 bash[22526]: cluster 2026-03-10T05:51:15.732481+0000 mgr.y (mgr.14409) 228 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:51:17.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:17 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:51:17] "GET /metrics HTTP/1.1" 200 214461 "" "Prometheus/2.33.4" 2026-03-10T05:51:19.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:18 vm02 bash[17462]: cluster 2026-03-10T05:51:17.732991+0000 mgr.y (mgr.14409) 229 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:51:19.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:18 vm02 bash[22526]: cluster 2026-03-10T05:51:17.732991+0000 mgr.y (mgr.14409) 229 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:51:19.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 
05:51:18 vm05 bash[17864]: cluster 2026-03-10T05:51:17.732991+0000 mgr.y (mgr.14409) 229 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:51:21.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:20 vm05 bash[17864]: cluster 2026-03-10T05:51:19.733275+0000 mgr.y (mgr.14409) 230 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:51:21.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:20 vm02 bash[17462]: cluster 2026-03-10T05:51:19.733275+0000 mgr.y (mgr.14409) 230 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:51:21.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:20 vm02 bash[22526]: cluster 2026-03-10T05:51:19.733275+0000 mgr.y (mgr.14409) 230 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:51:23.324 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:22 vm05 bash[18520]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:51:22] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T05:51:23.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:23 vm02 bash[17462]: cluster 2026-03-10T05:51:21.733737+0000 mgr.y (mgr.14409) 231 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:51:23.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:23 vm02 bash[17462]: audit 2026-03-10T05:51:22.798261+0000 mgr.y (mgr.14409) 232 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:51:23.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:23 vm02 bash[22526]: cluster 2026-03-10T05:51:21.733737+0000 mgr.y (mgr.14409) 231 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:51:23.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:23 vm02 bash[22526]: audit 2026-03-10T05:51:22.798261+0000 mgr.y (mgr.14409) 232 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:51:23.835 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:23 vm02 bash[43400]: level=error ts=2026-03-10T05:51:23.521Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T05:51:23.835 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:23 vm02 bash[43400]: level=warn ts=2026-03-10T05:51:23.523Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it 
doesn't contain any IP SANs" 2026-03-10T05:51:23.835 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:23 vm02 bash[43400]: level=warn ts=2026-03-10T05:51:23.523Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs" 2026-03-10T05:51:24.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:23 vm05 bash[17864]: cluster 2026-03-10T05:51:21.733737+0000 mgr.y (mgr.14409) 231 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:51:24.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:23 vm05 bash[17864]: audit 2026-03-10T05:51:22.798261+0000 mgr.y (mgr.14409) 232 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:51:25.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:24 vm05 bash[17864]: cluster 2026-03-10T05:51:23.734327+0000 mgr.y (mgr.14409) 233 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:51:25.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:24 vm02 bash[17462]: cluster 2026-03-10T05:51:23.734327+0000 mgr.y (mgr.14409) 233 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:51:25.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:24 vm02 bash[22526]: cluster 2026-03-10T05:51:23.734327+0000 mgr.y (mgr.14409) 233 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:51:27.430 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:27 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:51:27] "GET /metrics HTTP/1.1" 200 214463 "" "Prometheus/2.33.4" 2026-03-10T05:51:27.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:27 vm05 bash[17864]: cluster 2026-03-10T05:51:25.734680+0000 mgr.y (mgr.14409) 234 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:51:27.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:27 vm02 bash[17462]: cluster 2026-03-10T05:51:25.734680+0000 mgr.y (mgr.14409) 234 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:51:27.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:27 vm02 bash[22526]: cluster 2026-03-10T05:51:25.734680+0000 mgr.y (mgr.14409) 234 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:51:29.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:28 vm02 bash[17462]: cluster 2026-03-10T05:51:27.735134+0000 mgr.y (mgr.14409) 235 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:51:29.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:28 vm02 bash[22526]: cluster 2026-03-10T05:51:27.735134+0000 mgr.y (mgr.14409) 235 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s 
rd, 1 op/s 2026-03-10T05:51:29.126 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:28 vm05 bash[17864]: cluster 2026-03-10T05:51:27.735134+0000 mgr.y (mgr.14409) 235 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:51:29.376 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:29 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:29.377 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:51:29 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:29.377 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:29 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:29.377 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:51:29 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:29.377 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:51:29 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:29.377 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:29 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:29.377 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:51:29 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T05:51:29.377 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:51:29 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:29.377 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:51:29 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:29.665 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:29 vm05 systemd[1]: Stopping Ceph mgr.x for 107483ae-1c44-11f1-b530-c1172cd6122a... 2026-03-10T05:51:29.665 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:29 vm05 bash[37496]: Error response from daemon: No such container: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-mgr.x 2026-03-10T05:51:29.665 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:29 vm05 bash[37504]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-mgr-x 2026-03-10T05:51:29.665 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:29 vm05 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mgr.x.service: Main process exited, code=exited, status=143/n/a 2026-03-10T05:51:29.665 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:29 vm05 bash[37537]: Error response from daemon: No such container: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-mgr.x 2026-03-10T05:51:29.665 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:29 vm05 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mgr.x.service: Failed with result 'exit-code'. 2026-03-10T05:51:29.666 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:29 vm05 systemd[1]: Stopped Ceph mgr.x for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T05:51:29.666 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:29 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:29.666 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:51:29 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:29.666 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:51:29 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T05:51:29.666 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:29 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:29.666 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:29 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:29.667 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:51:29 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:29.667 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:51:29 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:29.667 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:51:29 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:29.667 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:51:29 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:29.970 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:29 vm05 systemd[1]: Started Ceph mgr.x for 107483ae-1c44-11f1-b530-c1172cd6122a. 
2026-03-10T05:51:29.970 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:29 vm05 bash[37598]: debug 2026-03-10T05:51:29.872+0000 7fa756a74140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T05:51:29.970 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:29 vm05 bash[37598]: debug 2026-03-10T05:51:29.908+0000 7fa756a74140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T05:51:29.971 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:29 vm02 bash[17462]: cephadm 2026-03-10T05:51:28.961695+0000 mgr.y (mgr.14409) 236 : cephadm [INF] Upgrade: Updating mgr.x
2026-03-10T05:51:29.971 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:29 vm02 bash[17462]: audit 2026-03-10T05:51:28.966589+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:51:29.971 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:29 vm02 bash[17462]: audit 2026-03-10T05:51:28.969165+0000 mon.c (mon.1) 114 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T05:51:29.971 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:29 vm02 bash[17462]: audit 2026-03-10T05:51:28.969397+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T05:51:29.971 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:29 vm02 bash[22526]: cephadm 2026-03-10T05:51:28.961695+0000 mgr.y (mgr.14409) 236 : cephadm [INF] Upgrade: Updating mgr.x
2026-03-10T05:51:29.971 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:29 vm02 bash[22526]: audit 2026-03-10T05:51:28.966589+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:51:29.971 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:29 vm02 bash[22526]: audit 2026-03-10T05:51:28.969165+0000 mon.c (mon.1) 114 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T05:51:29.971 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:29 vm02 bash[22526]: audit 2026-03-10T05:51:28.969397+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T05:51:29.971 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:29 vm02 bash[22526]: audit 2026-03-10T05:51:28.970291+0000 mon.c (mon.1) 115 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T05:51:29.971 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:29 vm02 bash[22526]: audit 2026-03-10T05:51:28.971109+0000 mon.c (mon.1) 116 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:51:29.971 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:29 vm02 bash[22526]: cephadm 2026-03-10T05:51:28.971797+0000 mgr.y (mgr.14409) 237 : cephadm [INF] Deploying daemon mgr.x on vm05
2026-03-10T05:51:29.971 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:29 vm02 bash[22526]: audit 2026-03-10T05:51:29.693108+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:51:29.971 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:29 vm02 bash[22526]: audit 2026-03-10T05:51:29.697794+0000 mon.c (mon.1) 117 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:51:29.971 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:29 vm02 bash[22526]: audit 2026-03-10T05:51:29.698855+0000 mon.c (mon.1) 118 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:51:29.971 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:29 vm02 bash[22526]: cluster 2026-03-10T05:51:29.735404+0000 mgr.y (mgr.14409) 238 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:51:30.252 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:30 vm05 bash[37598]: debug 2026-03-10T05:51:30.028+0000 7fa756a74140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T05:51:30.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:29 vm05 bash[17864]: cephadm 2026-03-10T05:51:28.961695+0000 mgr.y (mgr.14409) 236 : cephadm [INF] Upgrade: Updating mgr.x
2026-03-10T05:51:30.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:29 vm05 bash[17864]: audit 2026-03-10T05:51:28.966589+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:51:30.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:29 vm05 bash[17864]: audit 2026-03-10T05:51:28.969165+0000 mon.c (mon.1) 114 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T05:51:30.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:29 vm05 bash[17864]: audit 2026-03-10T05:51:28.969397+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T05:51:30.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:29 vm05 bash[17864]: audit 2026-03-10T05:51:28.970291+0000 mon.c (mon.1) 115 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T05:51:30.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:29 vm05 bash[17864]: audit 2026-03-10T05:51:28.971109+0000 mon.c (mon.1) 116 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:51:30.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:29 vm05 bash[17864]: cephadm 2026-03-10T05:51:28.971797+0000 mgr.y (mgr.14409) 237 : cephadm [INF] Deploying daemon mgr.x on vm05
2026-03-10T05:51:30.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:29 vm05 bash[17864]: audit 2026-03-10T05:51:29.693108+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:51:30.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:29 vm05 bash[17864]: audit 2026-03-10T05:51:29.697794+0000 mon.c (mon.1) 117 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:51:30.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:29 vm05 bash[17864]: audit 2026-03-10T05:51:29.698855+0000 mon.c (mon.1) 118 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:51:30.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:29 vm05 bash[17864]: cluster 2026-03-10T05:51:29.735404+0000 mgr.y (mgr.14409) 238 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:51:30.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:29 vm02 bash[17462]: audit 2026-03-10T05:51:28.970291+0000 mon.c (mon.1) 115 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T05:51:30.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:29 vm02 bash[17462]: audit 2026-03-10T05:51:28.971109+0000 mon.c (mon.1) 116 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:51:30.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:29 vm02 bash[17462]: cephadm 2026-03-10T05:51:28.971797+0000 mgr.y (mgr.14409) 237 : cephadm [INF] Deploying daemon mgr.x on vm05
2026-03-10T05:51:30.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:29 vm02 bash[17462]: audit 2026-03-10T05:51:29.693108+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:51:30.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:29 vm02 bash[17462]: audit 2026-03-10T05:51:29.697794+0000 mon.c (mon.1) 117 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:51:30.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:29 vm02 bash[17462]: audit 2026-03-10T05:51:29.698855+0000 mon.c (mon.1) 118 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:51:30.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:29 vm02 bash[17462]: cluster 2026-03-10T05:51:29.735404+0000 mgr.y (mgr.14409) 238 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:51:30.745 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:30 vm05 bash[37598]: debug 2026-03-10T05:51:30.308+0000 7fa756a74140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T05:51:30.834 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:30 vm02 bash[43400]: level=warn ts=2026-03-10T05:51:30.334Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=6 err="Post \"https://192.168.123.105:8443/api/prometheus_receiver\": dial tcp 192.168.123.105:8443: connect: connection refused"
2026-03-10T05:51:31.002 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:30 vm05 bash[37598]: debug 2026-03-10T05:51:30.744+0000 7fa756a74140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T05:51:31.002 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:30 vm05 bash[37598]: debug 2026-03-10T05:51:30.824+0000 7fa756a74140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T05:51:31.002 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:30 vm05 bash[37598]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T05:51:31.002 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:30 vm05 bash[37598]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-10T05:51:31.002 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:30 vm05 bash[37598]: from numpy import show_config as show_numpy_config
2026-03-10T05:51:31.002 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:30 vm05 bash[37598]: debug 2026-03-10T05:51:30.964+0000 7fa756a74140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T05:51:31.502 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:31 vm05 bash[37598]: debug 2026-03-10T05:51:31.100+0000 7fa756a74140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T05:51:31.502 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:31 vm05 bash[37598]: debug 2026-03-10T05:51:31.136+0000 7fa756a74140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T05:51:31.502 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:31 vm05 bash[37598]: debug 2026-03-10T05:51:31.176+0000 7fa756a74140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T05:51:31.502 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:31 vm05 bash[37598]: debug 2026-03-10T05:51:31.220+0000 7fa756a74140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T05:51:31.502 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:31 vm05 bash[37598]: debug 2026-03-10T05:51:31.272+0000 7fa756a74140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T05:51:31.965 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:31 vm05 bash[37598]: debug 2026-03-10T05:51:31.696+0000 7fa756a74140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T05:51:31.966 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:31 vm05 bash[37598]: debug 2026-03-10T05:51:31.736+0000 7fa756a74140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T05:51:31.966 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:31 vm05 bash[37598]: debug 2026-03-10T05:51:31.772+0000 7fa756a74140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T05:51:31.966 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:31 vm05 bash[37598]: debug 2026-03-10T05:51:31.924+0000 7fa756a74140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T05:51:32.252 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:31 vm05 bash[37598]: debug 2026-03-10T05:51:31.964+0000 7fa756a74140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T05:51:32.252 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:32 vm05 bash[37598]: debug 2026-03-10T05:51:32.000+0000 7fa756a74140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T05:51:32.252 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:32 vm05 bash[37598]: debug 2026-03-10T05:51:32.108+0000 7fa756a74140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T05:51:32.571 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:32 vm05 bash[37598]: debug 2026-03-10T05:51:32.268+0000 7fa756a74140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T05:51:32.571 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:32 vm05 bash[37598]: debug 2026-03-10T05:51:32.456+0000 7fa756a74140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T05:51:32.571 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:32 vm05 bash[37598]: debug 2026-03-10T05:51:32.508+0000 7fa756a74140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T05:51:32.991 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:32 vm05 bash[17864]: cluster 2026-03-10T05:51:31.735705+0000 mgr.y (mgr.14409) 239 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:51:32.991 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:32 vm05 bash[37598]: debug 2026-03-10T05:51:32.568+0000 7fa756a74140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T05:51:32.991 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:32 vm05 bash[37598]: debug 2026-03-10T05:51:32.756+0000 7fa756a74140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T05:51:33.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:32 vm02 bash[17462]: cluster 2026-03-10T05:51:31.735705+0000 mgr.y (mgr.14409) 239 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:51:33.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:32 vm02 bash[22526]: cluster 2026-03-10T05:51:31.735705+0000 mgr.y (mgr.14409) 239 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:51:33.252 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:33 vm05 bash[37598]: debug 2026-03-10T05:51:33.056+0000 7fa756a74140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T05:51:33.252 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:33 vm05 bash[37598]: [10/Mar/2026:05:51:33] ENGINE Bus STARTING
2026-03-10T05:51:33.252 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:33 vm05 bash[37598]: CherryPy Checker:
2026-03-10T05:51:33.252 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:33 vm05 bash[37598]: The Application mounted at '' has an empty config.
2026-03-10T05:51:33.252 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:33 vm05 bash[37598]: [10/Mar/2026:05:51:33] ENGINE Serving on http://:::9283
2026-03-10T05:51:33.252 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:33 vm05 bash[37598]: [10/Mar/2026:05:51:33] ENGINE Bus STARTED
2026-03-10T05:51:33.796 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:33 vm02 bash[43400]: level=error ts=2026-03-10T05:51:33.522Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443/api/prometheus_receiver\": dial tcp 192.168.123.105:8443: connect: connection refused; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs"
2026-03-10T05:51:33.796 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:33 vm02 bash[43400]: level=warn ts=2026-03-10T05:51:33.524Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T05:51:33.796 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:33 vm02 bash[43400]: level=warn ts=2026-03-10T05:51:33.525Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs"
2026-03-10T05:51:33.796 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:33 vm02 bash[17462]: audit 2026-03-10T05:51:32.807127+0000 mgr.y (mgr.14409) 240 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:51:34.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:33 vm02 bash[17462]: audit 2026-03-10T05:51:33.003213+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:51:34.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:33 vm02 bash[17462]: cluster 2026-03-10T05:51:33.059598+0000 mon.a (mon.0) 779 : cluster [DBG] Standby manager daemon x restarted
2026-03-10T05:51:34.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:33 vm02 bash[17462]: cluster 2026-03-10T05:51:33.059687+0000 mon.a (mon.0) 780 : cluster [DBG] Standby manager daemon x started
2026-03-10T05:51:34.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:33 vm02 bash[17462]: audit 2026-03-10T05:51:33.064375+0000 mon.b (mon.2) 31 : audit [DBG] from='mgr.? 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T05:51:34.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:33 vm02 bash[17462]: audit 2026-03-10T05:51:33.066838+0000 mon.b (mon.2) 32 : audit [DBG] from='mgr.? 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T05:51:34.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:33 vm02 bash[17462]: audit 2026-03-10T05:51:33.067575+0000 mon.b (mon.2) 33 : audit [DBG] from='mgr.? 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T05:51:34.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:33 vm02 bash[17462]: audit 2026-03-10T05:51:33.069443+0000 mon.b (mon.2) 34 : audit [DBG] from='mgr.? 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T05:51:34.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:33 vm02 bash[17462]: audit 2026-03-10T05:51:33.295673+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:51:34.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:33 vm02 bash[22526]: audit 2026-03-10T05:51:32.807127+0000 mgr.y (mgr.14409) 240 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:51:34.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:33 vm02 bash[22526]: audit 2026-03-10T05:51:33.003213+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:51:34.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:33 vm02 bash[22526]: cluster 2026-03-10T05:51:33.059598+0000 mon.a (mon.0) 779 : cluster [DBG] Standby manager daemon x restarted
2026-03-10T05:51:34.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:33 vm02 bash[22526]: cluster 2026-03-10T05:51:33.059687+0000 mon.a (mon.0) 780 : cluster [DBG] Standby manager daemon x started
2026-03-10T05:51:34.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:33 vm02 bash[22526]: audit 2026-03-10T05:51:33.064375+0000 mon.b (mon.2) 31 : audit [DBG] from='mgr.? 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T05:51:34.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:33 vm02 bash[22526]: audit 2026-03-10T05:51:33.066838+0000 mon.b (mon.2) 32 : audit [DBG] from='mgr.? 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T05:51:34.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:33 vm02 bash[22526]: audit 2026-03-10T05:51:33.067575+0000 mon.b (mon.2) 33 : audit [DBG] from='mgr.? 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T05:51:34.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:33 vm02 bash[22526]: audit 2026-03-10T05:51:33.069443+0000 mon.b (mon.2) 34 : audit [DBG] from='mgr.? 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T05:51:34.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:33 vm02 bash[22526]: audit 2026-03-10T05:51:33.295673+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:51:34.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:33 vm05 bash[17864]: audit 2026-03-10T05:51:32.807127+0000 mgr.y (mgr.14409) 240 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:51:34.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:33 vm05 bash[17864]: audit 2026-03-10T05:51:33.003213+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:51:34.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:33 vm05 bash[17864]: cluster 2026-03-10T05:51:33.059598+0000 mon.a (mon.0) 779 : cluster [DBG] Standby manager daemon x restarted
2026-03-10T05:51:34.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:33 vm05 bash[17864]: cluster 2026-03-10T05:51:33.059687+0000 mon.a (mon.0) 780 : cluster [DBG] Standby manager daemon x started
2026-03-10T05:51:34.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:33 vm05 bash[17864]: audit 2026-03-10T05:51:33.064375+0000 mon.b (mon.2) 31 : audit [DBG] from='mgr.? 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T05:51:34.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:33 vm05 bash[17864]: audit 2026-03-10T05:51:33.066838+0000 mon.b (mon.2) 32 : audit [DBG] from='mgr.? 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T05:51:34.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:33 vm05 bash[17864]: audit 2026-03-10T05:51:33.067575+0000 mon.b (mon.2) 33 : audit [DBG] from='mgr.? 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T05:51:34.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:33 vm05 bash[17864]: audit 2026-03-10T05:51:33.069443+0000 mon.b (mon.2) 34 : audit [DBG] from='mgr.? 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T05:51:34.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:33 vm05 bash[17864]: audit 2026-03-10T05:51:33.295673+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:51:35.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:34 vm02 bash[17462]: cluster 2026-03-10T05:51:33.736135+0000 mgr.y (mgr.14409) 241 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:51:35.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:34 vm02 bash[17462]: cluster 2026-03-10T05:51:34.017269+0000 mon.a (mon.0) 782 : cluster [DBG] mgrmap e21: y(active, since 4m), standbys: x
2026-03-10T05:51:35.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:34 vm02 bash[22526]: cluster 2026-03-10T05:51:33.736135+0000 mgr.y (mgr.14409) 241 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:51:35.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:34 vm02 bash[22526]: cluster 2026-03-10T05:51:34.017269+0000 mon.a (mon.0) 782 : cluster [DBG] mgrmap e21: y(active, since 4m), standbys: x
2026-03-10T05:51:35.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:34 vm05 bash[17864]: cluster 2026-03-10T05:51:33.736135+0000 mgr.y (mgr.14409) 241 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:51:35.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:34 vm05 bash[17864]: cluster 2026-03-10T05:51:34.017269+0000 mon.a (mon.0) 782 : cluster [DBG] mgrmap e21: y(active, since 4m), standbys: x
2026-03-10T05:51:37.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:36 vm02 bash[17462]: cluster 2026-03-10T05:51:35.736390+0000 mgr.y (mgr.14409) 242 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:51:37.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:36 vm02 bash[22526]: cluster 2026-03-10T05:51:35.736390+0000 mgr.y (mgr.14409) 242 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:51:37.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:36 vm05 bash[17864]: cluster 2026-03-10T05:51:35.736390+0000 mgr.y (mgr.14409) 242 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:51:37.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:37 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:51:37] "GET /metrics HTTP/1.1" 200 214471 "" "Prometheus/2.33.4"
2026-03-10T05:51:38.966 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:38 vm05 bash[37598]: [10/Mar/2026:05:51:38] ENGINE Bus STOPPING
2026-03-10T05:51:38.966 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:38 vm05 bash[37598]: [10/Mar/2026:05:51:38] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-10T05:51:38.966 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:38 vm05 bash[37598]: [10/Mar/2026:05:51:38] ENGINE Bus STOPPED
2026-03-10T05:51:38.966 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:38 vm05 bash[37598]: [10/Mar/2026:05:51:38] ENGINE Bus STARTING
2026-03-10T05:51:38.966 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:38 vm05 bash[17864]: audit 2026-03-10T05:51:37.596693+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:51:38.966 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:38 vm05 bash[17864]: audit 2026-03-10T05:51:37.601519+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:51:38.966 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:38 vm05 bash[17864]: audit 2026-03-10T05:51:37.604233+0000 mon.c (mon.1) 119 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:51:38.966 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:38 vm05 bash[17864]: cephadm 2026-03-10T05:51:37.605477+0000 mgr.y (mgr.14409) 243 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.y)
2026-03-10T05:51:38.966 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:38 vm05 bash[17864]: cephadm 2026-03-10T05:51:37.607283+0000 mgr.y (mgr.14409) 244 : cephadm [INF] Failing over to other MGR
2026-03-10T05:51:38.966 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:38 vm05 bash[17864]: audit 2026-03-10T05:51:37.607405+0000 mon.c (mon.1) 120 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "mgr fail", "who": "y"}]: dispatch
2026-03-10T05:51:38.966 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:38 vm05 bash[17864]: audit 2026-03-10T05:51:37.607609+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "mgr fail", "who": "y"}]: dispatch
2026-03-10T05:51:38.966 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:38 vm05 bash[17864]: cluster 2026-03-10T05:51:37.613270+0000 mon.a (mon.0) 786 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in
2026-03-10T05:51:38.966 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:38 vm05 bash[17864]: cluster 2026-03-10T05:51:37.675365+0000 mon.a (mon.0) 787 : cluster [DBG] Standby manager daemon y started
2026-03-10T05:51:38.966 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:38 vm05 bash[17864]: cluster 2026-03-10T05:51:37.736774+0000 mgr.y (mgr.14409) 245 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T05:51:38.968 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:38 vm02 bash[17731]: ignoring --setuser ceph since I am not root
2026-03-10T05:51:38.968 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:38 vm02 bash[17731]: ignoring --setgroup ceph since I am not root
2026-03-10T05:51:38.968 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:38 vm02 bash[17731]: debug 2026-03-10T05:51:38.663+0000 7f76a0c95700 1 -- 192.168.123.102:0/1524387842 <== mon.1 v2:192.168.123.102:3301/0 4 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 194+0+0 (secure 0 0 0) 0x55b4dba26340 con 0x55b4dbb2c400
2026-03-10T05:51:38.968 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:38 vm02 bash[17731]: debug 2026-03-10T05:51:38.743+0000 7f76a96f1000 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T05:51:38.968 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:38 vm02 bash[17731]: debug 2026-03-10T05:51:38.791+0000 7f76a96f1000 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T05:51:38.968 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:38 vm02 bash[17462]: audit 2026-03-10T05:51:37.596693+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:51:38.968 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:38 vm02 bash[17462]: audit 2026-03-10T05:51:37.601519+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:51:38.969 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:38 vm02 bash[17462]: audit 2026-03-10T05:51:37.604233+0000 mon.c (mon.1) 119 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:51:38.969 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:38 vm02 bash[17462]: cephadm 2026-03-10T05:51:37.605477+0000 mgr.y (mgr.14409) 243 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.y)
2026-03-10T05:51:38.969 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:38 vm02 bash[17462]: cephadm 2026-03-10T05:51:37.607283+0000 mgr.y (mgr.14409) 244 : cephadm [INF] Failing over to other MGR
2026-03-10T05:51:38.969 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:38 vm02 bash[17462]: audit 2026-03-10T05:51:37.607405+0000 mon.c (mon.1) 120 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "mgr fail", "who": "y"}]: dispatch
2026-03-10T05:51:38.969 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:38 vm02 bash[17462]: audit 2026-03-10T05:51:37.607609+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "mgr fail", "who": "y"}]: dispatch
2026-03-10T05:51:38.969 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:38 vm02 bash[17462]: cluster 2026-03-10T05:51:37.613270+0000 mon.a (mon.0) 786 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in
2026-03-10T05:51:38.969 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:38 vm02 bash[17462]: cluster 2026-03-10T05:51:37.675365+0000 mon.a (mon.0) 787 : cluster [DBG] Standby manager daemon y started
2026-03-10T05:51:38.969 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:38 vm02 bash[17462]: cluster 2026-03-10T05:51:37.736774+0000 mgr.y (mgr.14409) 245 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T05:51:38.969 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:38 vm02 bash[22526]: audit 2026-03-10T05:51:37.596693+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:51:38.969 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:38 vm02 bash[22526]: audit 2026-03-10T05:51:37.601519+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.14409 ' entity='mgr.y'
2026-03-10T05:51:38.969 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:38 vm02 bash[22526]: audit 2026-03-10T05:51:37.604233+0000 mon.c (mon.1) 119 : audit [DBG] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:51:38.969 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:38 vm02 bash[22526]: cephadm 2026-03-10T05:51:37.605477+0000 mgr.y (mgr.14409) 243 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.y)
2026-03-10T05:51:38.969 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:38 vm02 bash[22526]: cephadm 2026-03-10T05:51:37.607283+0000 mgr.y (mgr.14409) 244 : cephadm [INF] Failing over to other MGR
2026-03-10T05:51:38.969 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:38 vm02 bash[22526]: audit 2026-03-10T05:51:37.607405+0000 mon.c (mon.1) 120 : audit [INF] from='mgr.14409 192.168.123.102:0/4073702081' entity='mgr.y' cmd=[{"prefix": "mgr fail", "who": "y"}]: dispatch
2026-03-10T05:51:38.969 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:38 vm02 bash[22526]: audit 2026-03-10T05:51:37.607609+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd=[{"prefix": "mgr fail", "who": "y"}]: dispatch
2026-03-10T05:51:38.969 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:38 vm02 bash[22526]: cluster 2026-03-10T05:51:37.613270+0000 mon.a (mon.0) 786 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in
2026-03-10T05:51:38.969 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:38 vm02 bash[22526]: cluster 2026-03-10T05:51:37.675365+0000 mon.a (mon.0) 787 : cluster [DBG] Standby manager daemon y started
2026-03-10T05:51:38.969 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:38 vm02 bash[22526]: cluster 2026-03-10T05:51:37.736774+0000 mgr.y (mgr.14409) 245 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T05:51:38.969 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:38 vm02 bash[43400]: level=warn ts=2026-03-10T05:51:38.721Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=5 err="Post \"https://192.168.123.105:8443/api/prometheus_receiver\": dial tcp 192.168.123.105:8443: connect: connection refused"
2026-03-10T05:51:39.252 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:39 vm05 bash[37598]: [10/Mar/2026:05:51:39] ENGINE Serving on http://:::9283
2026-03-10T05:51:39.252 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:39 vm05 bash[37598]: [10/Mar/2026:05:51:39] ENGINE Bus STARTED
2026-03-10T05:51:39.334 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:39 vm02 bash[17731]: debug 2026-03-10T05:51:39.087+0000 7f76a96f1000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T05:51:39.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: audit 2026-03-10T05:51:38.625066+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "mgr fail", "who": "y"}]': finished
2026-03-10T05:51:39.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: cluster 2026-03-10T05:51:38.625157+0000 mon.a (mon.0) 789 : cluster [DBG] mgrmap e22: x(active, starting, since 1.01656s), standbys: y
2026-03-10T05:51:39.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: audit 2026-03-10T05:51:38.625551+0000 mon.b (mon.2) 35 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T05:51:39.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: audit 2026-03-10T05:51:38.625631+0000 mon.b (mon.2) 36 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T05:51:39.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: audit 2026-03-10T05:51:38.625678+0000 mon.b (mon.2) 37 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T05:51:39.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: audit 2026-03-10T05:51:38.628605+0000 mon.b (mon.2) 38 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T05:51:39.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: audit 2026-03-10T05:51:38.628720+0000 mon.b (mon.2) 39 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: audit 2026-03-10T05:51:38.628803+0000 mon.b (mon.2) 40 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: audit 2026-03-10T05:51:38.629081+0000 mon.b (mon.2) 41 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: audit 2026-03-10T05:51:38.629203+0000 mon.b (mon.2) 42 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: audit 2026-03-10T05:51:38.629308+0000 mon.b (mon.2) 43 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: audit 2026-03-10T05:51:38.629513+0000 mon.b (mon.2) 44 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: audit 2026-03-10T05:51:38.629629+0000 mon.b (mon.2) 45 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: audit 2026-03-10T05:51:38.629722+0000 mon.b (mon.2) 46 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: audit 2026-03-10T05:51:38.629827+0000 mon.b (mon.2) 47 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: audit 2026-03-10T05:51:38.629925+0000 mon.b (mon.2) 48 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: audit 2026-03-10T05:51:38.629989+0000 mon.b (mon.2) 49 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: audit 2026-03-10T05:51:38.630217+0000 mon.b (mon.2) 50 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: cluster 2026-03-10T05:51:38.689253+0000 mon.a (mon.0) 790 : cluster [INF] Manager daemon x is now available
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: audit 2026-03-10T05:51:38.703362+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: cephadm 2026-03-10T05:51:38.705141+0000 mgr.x (mgr.24773) 1 : cephadm [INF] Queued rgw.foo for migration
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: audit 2026-03-10T05:51:38.709227+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: cephadm 2026-03-10T05:51:38.711044+0000 mgr.x (mgr.24773) 2 : cephadm [INF] Queued rgw.smpl for migration
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: cephadm 2026-03-10T05:51:38.711295+0000 mgr.x (mgr.24773) 3 : cephadm [INF] No Migration is needed for rgw spec: {'placement': {'count': 2}, 'service_id': 'foo', 'service_name': 'rgw.foo', 'service_type': 'rgw', 'spec': {'rgw_frontend_port': 8000, 'rgw_realm': 'r', 'rgw_zone': 'z'}}
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: cephadm 2026-03-10T05:51:38.711312+0000 mgr.x (mgr.24773) 4 : cephadm [INF] No Migration is needed for rgw spec: {'placement': {'count': 2}, 'service_id': 'smpl', 'service_name': 'rgw.smpl', 'service_type': 'rgw'}
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: audit 2026-03-10T05:51:38.716604+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: cephadm 2026-03-10T05:51:38.718511+0000 mgr.x (mgr.24773) 5 : cephadm [INF] Migrating certs/keys for iscsi.foo spec to cert store
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: cephadm 2026-03-10T05:51:38.718542+0000 mgr.x (mgr.24773) 6 : cephadm [INF] Migrating certs/keys for rgw.foo spec to cert store
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: cephadm 2026-03-10T05:51:38.718560+0000 mgr.x (mgr.24773) 7 : cephadm [INF] Migrating certs/keys for rgw.smpl spec to cert store
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: cephadm 2026-03-10T05:51:38.718631+0000 mgr.x (mgr.24773) 8 : cephadm [INF] Checking for cert/key for grafana.a
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: audit 2026-03-10T05:51:38.724699+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: audit 2026-03-10T05:51:38.751931+0000 mon.b (mon.2) 51 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: audit 2026-03-10T05:51:38.752003+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24773 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: audit 2026-03-10T05:51:38.753078+0000 mon.b (mon.2) 52 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: audit 2026-03-10T05:51:38.818853+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.24773 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[17462]: audit 2026-03-10T05:51:38.819812+0000 mon.b (mon.2) 53 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:39 vm02 bash[17731]: debug 2026-03-10T05:51:39.559+0000 7f76a96f1000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:39 vm02 bash[17731]: debug 2026-03-10T05:51:39.663+0000 7f76a96f1000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: audit 2026-03-10T05:51:38.625066+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "mgr fail", "who": "y"}]': finished
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: cluster 2026-03-10T05:51:38.625157+0000 mon.a (mon.0) 789 : cluster [DBG] mgrmap e22: x(active, starting, since 1.01656s), standbys: y
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: audit 2026-03-10T05:51:38.625551+0000 mon.b (mon.2) 35 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: audit 2026-03-10T05:51:38.625631+0000 mon.b (mon.2) 36 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: audit 2026-03-10T05:51:38.625678+0000 mon.b (mon.2) 37 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: audit 2026-03-10T05:51:38.628605+0000 mon.b (mon.2) 38 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: audit 2026-03-10T05:51:38.628720+0000 mon.b (mon.2) 39 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: audit 2026-03-10T05:51:38.628803+0000 mon.b (mon.2) 40 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: audit 2026-03-10T05:51:38.629081+0000 mon.b (mon.2) 41 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: audit 2026-03-10T05:51:38.629203+0000 mon.b (mon.2) 42 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: audit 2026-03-10T05:51:38.629308+0000 mon.b (mon.2) 43 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: audit 2026-03-10T05:51:38.629513+0000 mon.b (mon.2) 44 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: audit 2026-03-10T05:51:38.629629+0000 mon.b (mon.2) 45 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: audit 2026-03-10T05:51:38.629722+0000 mon.b (mon.2) 46 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: audit 2026-03-10T05:51:38.629827+0000 mon.b (mon.2) 47 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: audit 2026-03-10T05:51:38.629925+0000 mon.b (mon.2) 48 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T05:51:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: audit 2026-03-10T05:51:38.629989+0000 mon.b (mon.2) 49 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T05:51:39.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: audit 2026-03-10T05:51:38.630217+0000 mon.b (mon.2) 50 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T05:51:39.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: cluster 2026-03-10T05:51:38.689253+0000 mon.a (mon.0) 790 : cluster [INF] Manager daemon x is now available
2026-03-10T05:51:39.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: audit 2026-03-10T05:51:38.703362+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:39.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: cephadm 2026-03-10T05:51:38.705141+0000 mgr.x (mgr.24773) 1 : cephadm [INF] Queued rgw.foo for migration
2026-03-10T05:51:39.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: audit 2026-03-10T05:51:38.709227+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:39.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: cephadm 2026-03-10T05:51:38.711044+0000 mgr.x (mgr.24773) 2 : cephadm [INF] Queued rgw.smpl for migration
2026-03-10T05:51:39.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: cephadm 2026-03-10T05:51:38.711295+0000 mgr.x (mgr.24773) 3 : cephadm [INF] No Migration is needed for rgw spec: {'placement': {'count': 2}, 'service_id': 'foo', 'service_name': 'rgw.foo', 'service_type': 'rgw', 'spec': {'rgw_frontend_port': 8000, 'rgw_realm': 'r', 'rgw_zone': 'z'}}
2026-03-10T05:51:39.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: cephadm 2026-03-10T05:51:38.711312+0000 mgr.x (mgr.24773) 4 : cephadm [INF] No Migration is needed for rgw spec: {'placement': {'count': 2}, 'service_id': 'smpl', 'service_name': 'rgw.smpl', 'service_type': 'rgw'}
2026-03-10T05:51:39.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: audit 2026-03-10T05:51:38.716604+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:39.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: cephadm 2026-03-10T05:51:38.718511+0000 mgr.x (mgr.24773) 5 : cephadm [INF] Migrating certs/keys for iscsi.foo spec to cert store
2026-03-10T05:51:39.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: cephadm 2026-03-10T05:51:38.718542+0000 mgr.x (mgr.24773) 6 : cephadm [INF] Migrating certs/keys for rgw.foo spec to cert store
2026-03-10T05:51:39.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: cephadm 2026-03-10T05:51:38.718560+0000 mgr.x (mgr.24773) 7 : cephadm [INF] Migrating certs/keys for rgw.smpl spec to cert store
2026-03-10T05:51:39.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: cephadm 2026-03-10T05:51:38.718631+0000 mgr.x (mgr.24773) 8 : cephadm [INF] Checking for cert/key for grafana.a
2026-03-10T05:51:39.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: audit 2026-03-10T05:51:38.724699+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:39.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: audit 2026-03-10T05:51:38.751931+0000 mon.b (mon.2) 51 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:51:39.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: audit 2026-03-10T05:51:38.752003+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24773 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-10T05:51:39.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: audit 2026-03-10T05:51:38.753078+0000 mon.b (mon.2) 52 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-10T05:51:39.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: audit 2026-03-10T05:51:38.818853+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.24773 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-10T05:51:39.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:39 vm02 bash[22526]: audit 2026-03-10T05:51:38.819812+0000 mon.b (mon.2) 53 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-10T05:51:40.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: audit 2026-03-10T05:51:38.625066+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.14409 ' entity='mgr.y' cmd='[{"prefix": "mgr fail", "who": "y"}]': finished
2026-03-10T05:51:40.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: cluster 2026-03-10T05:51:38.625157+0000 mon.a (mon.0) 789 : cluster [DBG] mgrmap e22: x(active, starting, since 1.01656s), standbys: y
2026-03-10T05:51:40.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: audit 2026-03-10T05:51:38.625551+0000 mon.b (mon.2) 35 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T05:51:40.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: audit 2026-03-10T05:51:38.625631+0000 mon.b (mon.2) 36 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T05:51:40.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: audit 2026-03-10T05:51:38.625678+0000 mon.b (mon.2) 37 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T05:51:40.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: audit 2026-03-10T05:51:38.628605+0000 mon.b (mon.2) 38 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T05:51:40.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: audit 2026-03-10T05:51:38.628720+0000 mon.b (mon.2) 39 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T05:51:40.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: audit 2026-03-10T05:51:38.628803+0000 mon.b (mon.2) 40 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T05:51:40.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: audit 2026-03-10T05:51:38.629081+0000 mon.b (mon.2) 41 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T05:51:40.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: audit 2026-03-10T05:51:38.629203+0000 mon.b (mon.2) 42 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T05:51:40.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: audit 2026-03-10T05:51:38.629308+0000 mon.b (mon.2) 43 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T05:51:40.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: audit 2026-03-10T05:51:38.629513+0000 mon.b (mon.2) 44 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T05:51:40.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: audit 2026-03-10T05:51:38.629629+0000 mon.b (mon.2) 45 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T05:51:40.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: audit 2026-03-10T05:51:38.629722+0000 mon.b (mon.2) 46 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T05:51:40.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: audit 2026-03-10T05:51:38.629827+0000 mon.b (mon.2) 47 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T05:51:40.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: audit 2026-03-10T05:51:38.629925+0000 mon.b (mon.2) 48 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T05:51:40.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: audit 2026-03-10T05:51:38.629989+0000 mon.b (mon.2) 49 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T05:51:40.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: audit 2026-03-10T05:51:38.630217+0000 mon.b (mon.2) 50 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T05:51:40.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: cluster 2026-03-10T05:51:38.689253+0000 mon.a (mon.0) 790 : cluster [INF] Manager daemon x is now available
2026-03-10T05:51:40.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: audit 2026-03-10T05:51:38.703362+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:40.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: cephadm 2026-03-10T05:51:38.705141+0000 mgr.x (mgr.24773) 1 : cephadm [INF] Queued rgw.foo for migration
2026-03-10T05:51:40.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: audit 2026-03-10T05:51:38.709227+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:40.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: cephadm 2026-03-10T05:51:38.711044+0000 mgr.x (mgr.24773) 2 : cephadm [INF] Queued rgw.smpl for migration
2026-03-10T05:51:40.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: cephadm 2026-03-10T05:51:38.711295+0000 mgr.x (mgr.24773) 3 : cephadm [INF] No Migration is needed for rgw spec: {'placement': {'count': 2}, 'service_id': 'foo', 'service_name': 'rgw.foo', 'service_type': 'rgw', 'spec': {'rgw_frontend_port': 8000, 'rgw_realm': 'r', 'rgw_zone': 'z'}}
2026-03-10T05:51:40.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: cephadm 2026-03-10T05:51:38.711312+0000 mgr.x (mgr.24773) 4 : cephadm [INF] No Migration is needed for rgw spec: {'placement': {'count': 2}, 'service_id': 'smpl', 'service_name': 'rgw.smpl', 'service_type': 'rgw'}
2026-03-10T05:51:40.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: audit 2026-03-10T05:51:38.716604+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:40.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: cephadm 2026-03-10T05:51:38.718511+0000 mgr.x (mgr.24773) 5 : cephadm [INF] Migrating certs/keys for iscsi.foo spec to cert store
2026-03-10T05:51:40.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: cephadm 2026-03-10T05:51:38.718542+0000 mgr.x (mgr.24773) 6 : cephadm [INF] Migrating certs/keys for rgw.foo spec to cert store
2026-03-10T05:51:40.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: cephadm 2026-03-10T05:51:38.718560+0000 mgr.x (mgr.24773) 7 : cephadm [INF] Migrating certs/keys for rgw.smpl spec to cert store
2026-03-10T05:51:40.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: cephadm 2026-03-10T05:51:38.718631+0000 mgr.x (mgr.24773) 8 : cephadm [INF] Checking for cert/key for grafana.a
2026-03-10T05:51:40.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: audit 2026-03-10T05:51:38.724699+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:40.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: audit 2026-03-10T05:51:38.751931+0000 mon.b (mon.2) 51 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:51:40.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: audit 2026-03-10T05:51:38.752003+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24773 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-10T05:51:40.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: audit 2026-03-10T05:51:38.753078+0000 mon.b (mon.2) 52 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-10T05:51:40.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: audit 2026-03-10T05:51:38.818853+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.24773 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-10T05:51:40.003 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:39 vm05 bash[17864]: audit 2026-03-10T05:51:38.819812+0000 mon.b (mon.2) 53 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-10T05:51:40.140 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:39 vm02 bash[17731]: debug 2026-03-10T05:51:39.855+0000 7f76a96f1000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T05:51:40.140 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:39 vm02 bash[17731]: debug 2026-03-10T05:51:39.955+0000 7f76a96f1000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T05:51:40.140 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:40 vm02 bash[17731]: debug 2026-03-10T05:51:40.003+0000 7f76a96f1000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T05:51:40.140 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:39 vm02 bash[43400]: level=warn ts=2026-03-10T05:51:39.935Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=6 err="Post \"https://192.168.123.102:8443//api/prometheus_receiver\": dial tcp 192.168.123.102:8443: connect: connection refused"
2026-03-10T05:51:40.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:40 vm02 bash[17731]: debug 2026-03-10T05:51:40.135+0000 7f76a96f1000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T05:51:40.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:40 vm02 bash[17731]: debug 2026-03-10T05:51:40.199+0000 7f76a96f1000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T05:51:40.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:40 vm02 bash[17731]: debug 2026-03-10T05:51:40.263+0000 7f76a96f1000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T05:51:40.584 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:40 vm02 bash[43400]: level=warn ts=2026-03-10T05:51:40.148Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=6 err="Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T05:51:41.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:40 vm05 bash[17864]: cephadm 2026-03-10T05:51:39.361996+0000 mgr.x (mgr.24773) 9 : cephadm [INF] Deploying cephadm binary to vm05 2026-03-10T05:51:41.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:40 vm05 bash[17864]: cluster 2026-03-10T05:51:39.660175+0000 mon.a (mon.0) 797 : cluster [DBG] mgrmap e23: x(active, since 2s), standbys: y 2026-03-10T05:51:41.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:40 vm05 bash[17864]: cluster 2026-03-10T05:51:39.675695+0000 mgr.x (mgr.24773) 10 : cluster [DBG] pgmap v3: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:51:41.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:40 vm05 bash[17864]: cephadm 2026-03-10T05:51:39.762883+0000 mgr.x (mgr.24773) 11 : cephadm [INF] Deploying cephadm binary to vm02 2026-03-10T05:51:41.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:40 vm05 bash[17864]: audit 2026-03-10T05:51:39.802150+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:41.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:40 vm05 bash[17864]: audit 2026-03-10T05:51:39.812982+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:41.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:40 vm02 bash[17731]: debug 2026-03-10T05:51:40.771+0000 7f76a96f1000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T05:51:41.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:40 vm02 bash[17731]: debug 2026-03-10T05:51:40.823+0000 7f76a96f1000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T05:51:41.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:40 vm02 bash[17731]: debug 2026-03-10T05:51:40.875+0000 7f76a96f1000 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T05:51:41.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:40 vm02 bash[17462]: cephadm 2026-03-10T05:51:39.361996+0000 mgr.x (mgr.24773) 9 : cephadm [INF] Deploying cephadm binary to vm05 2026-03-10T05:51:41.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:40 vm02 bash[17462]: cluster 2026-03-10T05:51:39.660175+0000 mon.a (mon.0) 797 : cluster [DBG] mgrmap e23: x(active, since 2s), standbys: y 2026-03-10T05:51:41.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:40 vm02 bash[17462]: cluster 2026-03-10T05:51:39.675695+0000 mgr.x (mgr.24773) 10 : cluster [DBG] pgmap v3: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:51:41.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:40 vm02 bash[17462]: 
cephadm 2026-03-10T05:51:39.762883+0000 mgr.x (mgr.24773) 11 : cephadm [INF] Deploying cephadm binary to vm02 2026-03-10T05:51:41.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:40 vm02 bash[17462]: audit 2026-03-10T05:51:39.802150+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:41.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:40 vm02 bash[17462]: audit 2026-03-10T05:51:39.812982+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:41.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:40 vm02 bash[22526]: cephadm 2026-03-10T05:51:39.361996+0000 mgr.x (mgr.24773) 9 : cephadm [INF] Deploying cephadm binary to vm05 2026-03-10T05:51:41.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:40 vm02 bash[22526]: cluster 2026-03-10T05:51:39.660175+0000 mon.a (mon.0) 797 : cluster [DBG] mgrmap e23: x(active, since 2s), standbys: y 2026-03-10T05:51:41.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:40 vm02 bash[22526]: cluster 2026-03-10T05:51:39.675695+0000 mgr.x (mgr.24773) 10 : cluster [DBG] pgmap v3: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:51:41.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:40 vm02 bash[22526]: cephadm 2026-03-10T05:51:39.762883+0000 mgr.x (mgr.24773) 11 : cephadm [INF] Deploying cephadm binary to vm02 2026-03-10T05:51:41.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:40 vm02 bash[22526]: audit 2026-03-10T05:51:39.802150+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:41.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:40 vm02 bash[22526]: audit 2026-03-10T05:51:39.812982+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:41.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:41 vm02 bash[17731]: debug 2026-03-10T05:51:41.183+0000 7f76a96f1000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T05:51:41.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:41 vm02 bash[17731]: debug 2026-03-10T05:51:41.239+0000 7f76a96f1000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T05:51:41.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:41 vm02 bash[17731]: debug 2026-03-10T05:51:41.295+0000 7f76a96f1000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T05:51:41.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:41 vm02 bash[17731]: debug 2026-03-10T05:51:41.371+0000 7f76a96f1000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T05:51:41.951 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:41 vm02 bash[17462]: cluster 2026-03-10T05:51:40.628585+0000 mgr.x (mgr.24773) 12 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:51:41.951 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:41 vm02 bash[17462]: cephadm 2026-03-10T05:51:40.669227+0000 mgr.x (mgr.24773) 13 : cephadm [INF] [10/Mar/2026:05:51:40] ENGINE Bus STARTING 2026-03-10T05:51:41.951 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:41 vm02 bash[17462]: cephadm 2026-03-10T05:51:40.770551+0000 mgr.x (mgr.24773) 14 : cephadm [INF] [10/Mar/2026:05:51:40] ENGINE Serving on http://192.168.123.105:8765 2026-03-10T05:51:41.951 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:41 vm02 bash[17731]: debug 2026-03-10T05:51:41.667+0000 7f76a96f1000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 
2026-03-10T05:51:41.951 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:41 vm02 bash[17731]: debug 2026-03-10T05:51:41.839+0000 7f76a96f1000 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T05:51:41.951 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:41 vm02 bash[17731]: debug 2026-03-10T05:51:41.891+0000 7f76a96f1000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T05:51:41.951 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:41 vm02 bash[22526]: cluster 2026-03-10T05:51:40.628585+0000 mgr.x (mgr.24773) 12 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:51:41.951 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:41 vm02 bash[22526]: cephadm 2026-03-10T05:51:40.669227+0000 mgr.x (mgr.24773) 13 : cephadm [INF] [10/Mar/2026:05:51:40] ENGINE Bus STARTING 2026-03-10T05:51:41.951 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:41 vm02 bash[22526]: cephadm 2026-03-10T05:51:40.770551+0000 mgr.x (mgr.24773) 14 : cephadm [INF] [10/Mar/2026:05:51:40] ENGINE Serving on http://192.168.123.105:8765 2026-03-10T05:51:42.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:41 vm05 bash[17864]: cluster 2026-03-10T05:51:40.628585+0000 mgr.x (mgr.24773) 12 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:51:42.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:41 vm05 bash[17864]: cephadm 2026-03-10T05:51:40.669227+0000 mgr.x (mgr.24773) 13 : cephadm [INF] [10/Mar/2026:05:51:40] ENGINE Bus STARTING 2026-03-10T05:51:42.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:41 vm05 bash[17864]: cephadm 2026-03-10T05:51:40.770551+0000 mgr.x (mgr.24773) 14 : cephadm [INF] [10/Mar/2026:05:51:40] ENGINE Serving on http://192.168.123.105:8765 2026-03-10T05:51:42.334 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:41 vm02 bash[17731]: debug 2026-03-10T05:51:41.947+0000 7f76a96f1000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T05:51:42.334 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:42 vm02 bash[17731]: debug 2026-03-10T05:51:42.083+0000 7f76a96f1000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T05:51:42.808 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:42 vm02 bash[22526]: cephadm 2026-03-10T05:51:40.881879+0000 mgr.x (mgr.24773) 15 : cephadm [INF] [10/Mar/2026:05:51:40] ENGINE Serving on https://192.168.123.105:7150 2026-03-10T05:51:42.808 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:42 vm02 bash[22526]: cephadm 2026-03-10T05:51:40.881933+0000 mgr.x (mgr.24773) 16 : cephadm [INF] [10/Mar/2026:05:51:40] ENGINE Bus STARTED 2026-03-10T05:51:42.808 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:42 vm02 bash[22526]: cephadm 2026-03-10T05:51:40.882258+0000 mgr.x (mgr.24773) 17 : cephadm [INF] [10/Mar/2026:05:51:40] ENGINE Client ('192.168.123.105', 50396) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T05:51:42.808 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:42 vm02 bash[22526]: cluster 2026-03-10T05:51:42.539620+0000 mon.a (mon.0) 800 : cluster [DBG] Standby manager daemon y restarted 2026-03-10T05:51:42.808 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:42 vm02 bash[22526]: cluster 2026-03-10T05:51:42.539704+0000 mon.a (mon.0) 801 : cluster [DBG] Standby manager daemon y started 2026-03-10T05:51:42.808 
INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:42 vm02 bash[22526]: audit 2026-03-10T05:51:42.542017+0000 mon.c (mon.1) 121 : audit [DBG] from='mgr.? 192.168.123.102:0/3853167779' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-10T05:51:42.809 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:42 vm02 bash[22526]: audit 2026-03-10T05:51:42.542738+0000 mon.c (mon.1) 122 : audit [DBG] from='mgr.? 192.168.123.102:0/3853167779' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T05:51:42.809 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:42 vm02 bash[22526]: audit 2026-03-10T05:51:42.544146+0000 mon.c (mon.1) 123 : audit [DBG] from='mgr.? 192.168.123.102:0/3853167779' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-10T05:51:42.809 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:42 vm02 bash[22526]: audit 2026-03-10T05:51:42.544889+0000 mon.c (mon.1) 124 : audit [DBG] from='mgr.? 192.168.123.102:0/3853167779' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T05:51:42.809 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:42 vm02 bash[17462]: cephadm 2026-03-10T05:51:40.881879+0000 mgr.x (mgr.24773) 15 : cephadm [INF] [10/Mar/2026:05:51:40] ENGINE Serving on https://192.168.123.105:7150 2026-03-10T05:51:42.809 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:42 vm02 bash[17462]: cephadm 2026-03-10T05:51:40.881933+0000 mgr.x (mgr.24773) 16 : cephadm [INF] [10/Mar/2026:05:51:40] ENGINE Bus STARTED 2026-03-10T05:51:42.809 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:42 vm02 bash[17462]: cephadm 2026-03-10T05:51:40.882258+0000 mgr.x (mgr.24773) 17 : cephadm [INF] [10/Mar/2026:05:51:40] ENGINE Client ('192.168.123.105', 50396) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T05:51:42.809 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:42 vm02 bash[17462]: cluster 2026-03-10T05:51:42.539620+0000 mon.a (mon.0) 800 : cluster [DBG] Standby manager daemon y restarted 2026-03-10T05:51:42.809 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:42 vm02 bash[17462]: cluster 2026-03-10T05:51:42.539704+0000 mon.a (mon.0) 801 : cluster [DBG] Standby manager daemon y started 2026-03-10T05:51:42.809 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:42 vm02 bash[17462]: audit 2026-03-10T05:51:42.542017+0000 mon.c (mon.1) 121 : audit [DBG] from='mgr.? 192.168.123.102:0/3853167779' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-10T05:51:42.809 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:42 vm02 bash[17462]: audit 2026-03-10T05:51:42.542738+0000 mon.c (mon.1) 122 : audit [DBG] from='mgr.? 192.168.123.102:0/3853167779' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T05:51:42.809 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:42 vm02 bash[17462]: audit 2026-03-10T05:51:42.544146+0000 mon.c (mon.1) 123 : audit [DBG] from='mgr.? 192.168.123.102:0/3853167779' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-10T05:51:42.809 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:42 vm02 bash[17462]: audit 2026-03-10T05:51:42.544889+0000 mon.c (mon.1) 124 : audit [DBG] from='mgr.? 
192.168.123.102:0/3853167779' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T05:51:42.809 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:42 vm02 bash[17731]: debug 2026-03-10T05:51:42.535+0000 7f76a96f1000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T05:51:42.809 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:42 vm02 bash[17731]: [10/Mar/2026:05:51:42] ENGINE Bus STARTING 2026-03-10T05:51:42.809 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:42 vm02 bash[17731]: CherryPy Checker: 2026-03-10T05:51:42.809 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:42 vm02 bash[17731]: The Application mounted at '' has an empty config. 2026-03-10T05:51:42.809 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:42 vm02 bash[17731]: [10/Mar/2026:05:51:42] ENGINE Serving on http://:::9283 2026-03-10T05:51:42.809 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:42 vm02 bash[17731]: [10/Mar/2026:05:51:42] ENGINE Bus STARTED 2026-03-10T05:51:43.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:42 vm05 bash[17864]: cephadm 2026-03-10T05:51:40.881879+0000 mgr.x (mgr.24773) 15 : cephadm [INF] [10/Mar/2026:05:51:40] ENGINE Serving on https://192.168.123.105:7150 2026-03-10T05:51:43.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:42 vm05 bash[17864]: cephadm 2026-03-10T05:51:40.881933+0000 mgr.x (mgr.24773) 16 : cephadm [INF] [10/Mar/2026:05:51:40] ENGINE Bus STARTED 2026-03-10T05:51:43.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:42 vm05 bash[17864]: cephadm 2026-03-10T05:51:40.882258+0000 mgr.x (mgr.24773) 17 : cephadm [INF] [10/Mar/2026:05:51:40] ENGINE Client ('192.168.123.105', 50396) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T05:51:43.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:42 vm05 bash[17864]: cluster 2026-03-10T05:51:42.539620+0000 mon.a (mon.0) 800 : cluster [DBG] Standby manager daemon y restarted 2026-03-10T05:51:43.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:42 vm05 bash[17864]: cluster 2026-03-10T05:51:42.539704+0000 mon.a (mon.0) 801 : cluster [DBG] Standby manager daemon y started 2026-03-10T05:51:43.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:42 vm05 bash[17864]: audit 2026-03-10T05:51:42.542017+0000 mon.c (mon.1) 121 : audit [DBG] from='mgr.? 192.168.123.102:0/3853167779' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-10T05:51:43.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:42 vm05 bash[17864]: audit 2026-03-10T05:51:42.542738+0000 mon.c (mon.1) 122 : audit [DBG] from='mgr.? 192.168.123.102:0/3853167779' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T05:51:43.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:42 vm05 bash[17864]: audit 2026-03-10T05:51:42.544146+0000 mon.c (mon.1) 123 : audit [DBG] from='mgr.? 192.168.123.102:0/3853167779' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-10T05:51:43.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:42 vm05 bash[17864]: audit 2026-03-10T05:51:42.544889+0000 mon.c (mon.1) 124 : audit [DBG] from='mgr.? 
192.168.123.102:0/3853167779' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T05:51:43.002 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:42 vm05 bash[37598]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:51:42] "GET /metrics HTTP/1.1" 200 34963 "" "Prometheus/2.33.4" 2026-03-10T05:51:43.084 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:43 vm02 bash[43400]: level=warn ts=2026-03-10T05:51:43.012Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=7 err="Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs" 2026-03-10T05:51:43.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:43 vm02 bash[17462]: cluster 2026-03-10T05:51:42.628910+0000 mgr.x (mgr.24773) 18 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:51:43.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:43 vm02 bash[17462]: cluster 2026-03-10T05:51:42.676227+0000 mon.a (mon.0) 802 : cluster [DBG] mgrmap e24: x(active, since 5s), standbys: y 2026-03-10T05:51:43.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:43 vm02 bash[17462]: audit 2026-03-10T05:51:42.809242+0000 mgr.x (mgr.24773) 19 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:51:43.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:43 vm02 bash[22526]: cluster 2026-03-10T05:51:42.628910+0000 mgr.x (mgr.24773) 18 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:51:43.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:43 vm02 bash[22526]: cluster 2026-03-10T05:51:42.676227+0000 mon.a (mon.0) 802 : cluster [DBG] mgrmap e24: x(active, since 5s), standbys: y 2026-03-10T05:51:43.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:43 vm02 bash[22526]: audit 2026-03-10T05:51:42.809242+0000 mgr.x (mgr.24773) 19 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:51:43.835 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:43 vm02 bash[43400]: level=error ts=2026-03-10T05:51:43.523Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs" 2026-03-10T05:51:43.835 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:43 vm02 bash[43400]: level=warn ts=2026-03-10T05:51:43.524Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T05:51:43.835 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:43 vm02 
bash[43400]: level=warn ts=2026-03-10T05:51:43.524Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs"
2026-03-10T05:51:44.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:43 vm05 bash[17864]: cluster 2026-03-10T05:51:42.628910+0000 mgr.x (mgr.24773) 18 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:51:44.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:43 vm05 bash[17864]: cluster 2026-03-10T05:51:42.676227+0000 mon.a (mon.0) 802 : cluster [DBG] mgrmap e24: x(active, since 5s), standbys: y
2026-03-10T05:51:44.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:43 vm05 bash[17864]: audit 2026-03-10T05:51:42.809242+0000 mgr.x (mgr.24773) 19 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:51:44.096 INFO:teuthology.orchestra.run.vm02.stdout:true
2026-03-10T05:51:44.490 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T05:51:44.490 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.a vm02 *:9093,9094 running (4m) 2m ago 4m 16.4M - ba2b418f427c 3305780e5ef5
2026-03-10T05:51:44.490 INFO:teuthology.orchestra.run.vm02.stdout:grafana.a vm05 *:3000 running (4m) 11s ago 4m 40.7M - 8.3.5 dad864ee21e9 a370f3725ef2
2026-03-10T05:51:44.491 INFO:teuthology.orchestra.run.vm02.stdout:iscsi.foo.vm02.mxbwmh vm02 running (4m) 2m ago 4m 41.3M - 3.5 e1d6a67b021e c01d22afac06
2026-03-10T05:51:44.491 INFO:teuthology.orchestra.run.vm02.stdout:mgr.x vm05 *:8443,9283 running (14s) 11s ago 7m 166M - 19.2.3-678-ge911bdeb 654f31e6858e eefd57c0b61c
2026-03-10T05:51:44.491 INFO:teuthology.orchestra.run.vm02.stdout:mgr.y vm02 *:9283 running (7m) 2m ago 7m 445M - 17.2.0 e1d6a67b021e a04e3f113661
2026-03-10T05:51:44.491 INFO:teuthology.orchestra.run.vm02.stdout:mon.a vm02 running (7m) 2m ago 7m 49.4M 2048M 17.2.0 e1d6a67b021e bf59d12a7baa
2026-03-10T05:51:44.491 INFO:teuthology.orchestra.run.vm02.stdout:mon.b vm05 running (7m) 11s ago 7m 37.5M 2048M 17.2.0 e1d6a67b021e 96a2a71fd403
2026-03-10T05:51:44.491 INFO:teuthology.orchestra.run.vm02.stdout:mon.c vm02 running (7m) 2m ago 7m 47.6M 2048M 17.2.0 e1d6a67b021e 2f6dcf491c61
2026-03-10T05:51:44.491 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.a vm02 *:9100 running (4m) 2m ago 4m 8040k - 1dbe0e931976 111574d033cc
2026-03-10T05:51:44.491 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.b vm05 *:9100 running (4m) 11s ago 4m 9543k - 1dbe0e931976 b6278e64d85c
2026-03-10T05:51:44.491 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (6m) 2m ago 6m 47.9M 4096M 17.2.0 e1d6a67b021e 563d55a3e6a4
2026-03-10T05:51:44.491 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (6m) 2m ago 6m 50.8M 4096M 17.2.0 e1d6a67b021e 8c25a1e89677
2026-03-10T05:51:44.491 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (6m) 2m ago 6m 46.2M 4096M 17.2.0 e1d6a67b021e 826f54bdbc5c
2026-03-10T05:51:44.491 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm02 running (6m) 2m ago 6m 49.1M 4096M 17.2.0 e1d6a67b021e 0c6cfa53c9fd
2026-03-10T05:51:44.491 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (5m) 11s ago 5m 51.0M 4096M 17.2.0 e1d6a67b021e 4ffe1741f201
2026-03-10T05:51:44.491 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (5m) 11s ago 5m 49.4M 4096M 17.2.0 e1d6a67b021e cba5583c238e
2026-03-10T05:51:44.491 INFO:teuthology.orchestra.run.vm02.stdout:osd.6 vm05 running (5m) 11s ago 5m 47.4M 4096M 17.2.0 e1d6a67b021e 9d1b370357d7
2026-03-10T05:51:44.491 INFO:teuthology.orchestra.run.vm02.stdout:osd.7 vm05 running (5m) 11s ago 5m 48.9M 4096M 17.2.0 e1d6a67b021e 8a4837b788cf
2026-03-10T05:51:44.491 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.a vm05 *:9095 running (4m) 11s ago 4m 50.9M - 514e6a882f6e 6c053703db40
2026-03-10T05:51:44.491 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm02.pbogjd vm02 *:8000 running (4m) 2m ago 4m 82.9M - 17.2.0 e1d6a67b021e 2ab2ffd1abaa
2026-03-10T05:51:44.491 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm05.hvmsxl vm05 *:8000 running (4m) 11s ago 4m 83.5M - 17.2.0 e1d6a67b021e 85d1c77b7e9d
2026-03-10T05:51:44.491 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm02.pglcfm vm02 *:80 running (4m) 2m ago 4m 82.7M - 17.2.0 e1d6a67b021e ef152a460673
2026-03-10T05:51:44.491 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm05.hqqmap vm05 *:80 running (4m) 11s ago 4m 83.5M - 17.2.0 e1d6a67b021e 29c9ee794f34
2026-03-10T05:51:44.747 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:51:44.747 INFO:teuthology.orchestra.run.vm02.stdout: "mon": {
2026-03-10T05:51:44.747 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3
2026-03-10T05:51:44.747 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:51:44.747 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": {
2026-03-10T05:51:44.747 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 1,
2026-03-10T05:51:44.747 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1
2026-03-10T05:51:44.747 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:51:44.747 INFO:teuthology.orchestra.run.vm02.stdout: "osd": {
2026-03-10T05:51:44.747 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8
2026-03-10T05:51:44.747 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:51:44.747 INFO:teuthology.orchestra.run.vm02.stdout: "mds": {},
2026-03-10T05:51:44.747 INFO:teuthology.orchestra.run.vm02.stdout: "rgw": {
2026-03-10T05:51:44.747 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4
2026-03-10T05:51:44.747 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:51:44.747 INFO:teuthology.orchestra.run.vm02.stdout: "overall": {
2026-03-10T05:51:44.747 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 16,
2026-03-10T05:51:44.747 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1
2026-03-10T05:51:44.747 INFO:teuthology.orchestra.run.vm02.stdout: }
2026-03-10T05:51:44.747 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:51:44.953 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:51:44.953 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
2026-03-10T05:51:44.953 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true,
2026-03-10T05:51:44.953 INFO:teuthology.orchestra.run.vm02.stdout: "which": "Upgrading all daemon types on all hosts",
2026-03-10T05:51:44.953 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [],
2026-03-10T05:51:44.953 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "1/23 daemons upgraded",
2026-03-10T05:51:44.953 INFO:teuthology.orchestra.run.vm02.stdout: "message": "",
2026-03-10T05:51:44.953 INFO:teuthology.orchestra.run.vm02.stdout: "is_paused": false
2026-03-10T05:51:44.953 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:51:45.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:44 vm05 bash[17864]: cluster 2026-03-10T05:51:43.684156+0000 mon.a (mon.0) 803 : cluster [DBG] mgrmap e25: x(active, since 6s), standbys: y
2026-03-10T05:51:45.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:44 vm02 bash[17462]: cluster 2026-03-10T05:51:43.684156+0000 mon.a (mon.0) 803 : cluster [DBG] mgrmap e25: x(active, since 6s), standbys: y
2026-03-10T05:51:45.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:44 vm02 bash[22526]: cluster 2026-03-10T05:51:43.684156+0000 mon.a (mon.0) 803 : cluster [DBG] mgrmap e25: x(active, since 6s), standbys: y
2026-03-10T05:51:45.258 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_OK
2026-03-10T05:51:46.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:45 vm05 bash[17864]: audit 2026-03-10T05:51:44.086585+0000 mgr.x (mgr.24773) 20 : audit [DBG] from='client.24913 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:51:46.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:45 vm05 bash[17864]: audit 2026-03-10T05:51:44.291166+0000 mgr.x (mgr.24773) 21 : audit [DBG] from='client.14973 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:51:46.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:45 vm05 bash[17864]: audit 2026-03-10T05:51:44.486793+0000 mgr.x (mgr.24773) 22 : audit [DBG] from='client.24919 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:51:46.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:45 vm05 bash[17864]: cluster 2026-03-10T05:51:44.629281+0000 mgr.x (mgr.24773) 23 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:51:46.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:45 vm05 bash[17864]: audit 2026-03-10T05:51:44.748098+0000 mon.b (mon.2) 54 : audit [DBG] from='client.? 192.168.123.102:0/3939635225' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:51:46.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:45 vm05 bash[17864]: audit 2026-03-10T05:51:45.101140+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:46.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:45 vm05 bash[17864]: audit 2026-03-10T05:51:45.109058+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:46.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:45 vm05 bash[17864]: audit 2026-03-10T05:51:45.256444+0000 mon.a (mon.0) 806 : audit [DBG] from='client.? 
192.168.123.102:0/465245714' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:51:46.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:45 vm05 bash[17864]: audit 2026-03-10T05:51:45.628634+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:46.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:45 vm05 bash[17864]: audit 2026-03-10T05:51:45.637011+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:46.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:45 vm02 bash[17462]: audit 2026-03-10T05:51:44.086585+0000 mgr.x (mgr.24773) 20 : audit [DBG] from='client.24913 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:51:46.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:45 vm02 bash[17462]: audit 2026-03-10T05:51:44.291166+0000 mgr.x (mgr.24773) 21 : audit [DBG] from='client.14973 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:51:46.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:45 vm02 bash[17462]: audit 2026-03-10T05:51:44.486793+0000 mgr.x (mgr.24773) 22 : audit [DBG] from='client.24919 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:51:46.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:45 vm02 bash[17462]: cluster 2026-03-10T05:51:44.629281+0000 mgr.x (mgr.24773) 23 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:51:46.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:45 vm02 bash[17462]: audit 2026-03-10T05:51:44.748098+0000 mon.b (mon.2) 54 : audit [DBG] from='client.? 192.168.123.102:0/3939635225' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:51:46.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:45 vm02 bash[17462]: audit 2026-03-10T05:51:45.101140+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:46.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:45 vm02 bash[17462]: audit 2026-03-10T05:51:45.109058+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:46.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:45 vm02 bash[17462]: audit 2026-03-10T05:51:45.256444+0000 mon.a (mon.0) 806 : audit [DBG] from='client.? 
192.168.123.102:0/465245714' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:51:46.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:45 vm02 bash[17462]: audit 2026-03-10T05:51:45.628634+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:46.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:45 vm02 bash[17462]: audit 2026-03-10T05:51:45.637011+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:46.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:45 vm02 bash[22526]: audit 2026-03-10T05:51:44.086585+0000 mgr.x (mgr.24773) 20 : audit [DBG] from='client.24913 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:51:46.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:45 vm02 bash[22526]: audit 2026-03-10T05:51:44.291166+0000 mgr.x (mgr.24773) 21 : audit [DBG] from='client.14973 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:51:46.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:45 vm02 bash[22526]: audit 2026-03-10T05:51:44.486793+0000 mgr.x (mgr.24773) 22 : audit [DBG] from='client.24919 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:51:46.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:45 vm02 bash[22526]: cluster 2026-03-10T05:51:44.629281+0000 mgr.x (mgr.24773) 23 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:51:46.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:45 vm02 bash[22526]: audit 2026-03-10T05:51:44.748098+0000 mon.b (mon.2) 54 : audit [DBG] from='client.? 192.168.123.102:0/3939635225' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:51:46.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:45 vm02 bash[22526]: audit 2026-03-10T05:51:45.101140+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:46.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:45 vm02 bash[22526]: audit 2026-03-10T05:51:45.109058+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:46.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:45 vm02 bash[22526]: audit 2026-03-10T05:51:45.256444+0000 mon.a (mon.0) 806 : audit [DBG] from='client.? 
192.168.123.102:0/465245714' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:51:46.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:45 vm02 bash[22526]: audit 2026-03-10T05:51:45.628634+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:46.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:45 vm02 bash[22526]: audit 2026-03-10T05:51:45.637011+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:47.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:46 vm05 bash[17864]: audit 2026-03-10T05:51:44.953772+0000 mgr.x (mgr.24773) 24 : audit [DBG] from='client.24928 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:51:47.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:46 vm05 bash[17864]: audit 2026-03-10T05:51:45.689719+0000 mon.a (mon.0) 809 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:47.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:46 vm05 bash[17864]: audit 2026-03-10T05:51:45.719035+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:47.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:46 vm05 bash[17864]: audit 2026-03-10T05:51:46.292637+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:47.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:46 vm05 bash[17864]: audit 2026-03-10T05:51:46.427137+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:47.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:46 vm05 bash[17864]: audit 2026-03-10T05:51:46.429678+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24773 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:51:47.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:46 vm05 bash[17864]: audit 2026-03-10T05:51:46.430925+0000 mon.b (mon.2) 55 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:51:47.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:46 vm02 bash[17462]: audit 2026-03-10T05:51:44.953772+0000 mgr.x (mgr.24773) 24 : audit [DBG] from='client.24928 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:51:47.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:46 vm02 bash[17462]: audit 2026-03-10T05:51:45.689719+0000 mon.a (mon.0) 809 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:47.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:46 vm02 bash[17462]: audit 2026-03-10T05:51:45.719035+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:47.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:46 vm02 bash[17462]: audit 2026-03-10T05:51:46.292637+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:47.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:46 vm02 bash[17462]: audit 2026-03-10T05:51:46.427137+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:47.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:46 vm02 bash[17462]: audit 2026-03-10T05:51:46.429678+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24773 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": 
"osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:51:47.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:46 vm02 bash[17462]: audit 2026-03-10T05:51:46.430925+0000 mon.b (mon.2) 55 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:51:47.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:46 vm02 bash[22526]: audit 2026-03-10T05:51:44.953772+0000 mgr.x (mgr.24773) 24 : audit [DBG] from='client.24928 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:51:47.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:46 vm02 bash[22526]: audit 2026-03-10T05:51:45.689719+0000 mon.a (mon.0) 809 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:47.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:46 vm02 bash[22526]: audit 2026-03-10T05:51:45.719035+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:47.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:46 vm02 bash[22526]: audit 2026-03-10T05:51:46.292637+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:47.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:46 vm02 bash[22526]: audit 2026-03-10T05:51:46.427137+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:47.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:46 vm02 bash[22526]: audit 2026-03-10T05:51:46.429678+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24773 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:51:47.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:46 vm02 bash[22526]: audit 2026-03-10T05:51:46.430925+0000 mon.b (mon.2) 55 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:51:47.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:47 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:51:47] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T05:51:48.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:47 vm05 bash[17864]: cluster 2026-03-10T05:51:46.629934+0000 mgr.x (mgr.24773) 25 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-10T05:51:48.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:47 vm02 bash[17462]: cluster 2026-03-10T05:51:46.629934+0000 mgr.x (mgr.24773) 25 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-10T05:51:48.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:47 vm02 bash[22526]: cluster 2026-03-10T05:51:46.629934+0000 mgr.x (mgr.24773) 25 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-10T05:51:49.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:49 vm02 bash[17462]: cluster 2026-03-10T05:51:48.630294+0000 mgr.x (mgr.24773) 26 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-10T05:51:49.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:49 vm02 
2026-03-10T05:51:49.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:49 vm05 bash[17864]: cluster 2026-03-10T05:51:48.630294+0000 mgr.x (mgr.24773) 26 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T05:51:52.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:51 vm05 bash[17864]: cluster 2026-03-10T05:51:50.630798+0000 mgr.x (mgr.24773) 27 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T05:51:53.228 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:52 vm05 bash[37598]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:51:52] "GET /metrics HTTP/1.1" 200 34963 "" "Prometheus/2.33.4"
2026-03-10T05:51:53.313 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: audit 2026-03-10T05:51:52.226088+0000 mon.a (mon.0) 814 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:53.313 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: audit 2026-03-10T05:51:52.231414+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:53.313 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: audit 2026-03-10T05:51:52.234110+0000 mon.a (mon.0) 816 : audit [INF] from='mgr.24773 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:51:53.313 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: audit 2026-03-10T05:51:52.235451+0000 mon.b (mon.2) 56 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:51:53.313 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: audit 2026-03-10T05:51:52.236774+0000 mon.b (mon.2) 57 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:51:53.313 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: audit 2026-03-10T05:51:52.237455+0000 mon.b (mon.2) 58 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:51:53.313 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: cephadm 2026-03-10T05:51:52.238305+0000 mgr.x (mgr.24773) 28 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf
2026-03-10T05:51:53.313 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: cephadm 2026-03-10T05:51:52.238449+0000 mgr.x (mgr.24773) 29 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf
2026-03-10T05:51:53.313 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: cephadm 2026-03-10T05:51:52.275572+0000 mgr.x (mgr.24773) 30 : cephadm [INF] Updating vm02:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.conf
2026-03-10T05:51:53.313 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: cephadm 2026-03-10T05:51:52.277056+0000 mgr.x (mgr.24773) 31 : cephadm [INF] Updating vm05:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.conf
2026-03-10T05:51:53.313 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: cephadm 2026-03-10T05:51:52.310287+0000 mgr.x (mgr.24773) 32 : cephadm [INF] Updating vm02:/etc/ceph/ceph.client.admin.keyring
2026-03-10T05:51:53.313 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: cephadm 2026-03-10T05:51:52.311721+0000 mgr.x (mgr.24773) 33 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring
2026-03-10T05:51:53.313 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: cephadm 2026-03-10T05:51:52.343859+0000 mgr.x (mgr.24773) 34 : cephadm [INF] Updating vm02:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.client.admin.keyring
2026-03-10T05:51:53.313 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: cephadm 2026-03-10T05:51:52.344116+0000 mgr.x (mgr.24773) 35 : cephadm [INF] Updating vm05:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.client.admin.keyring
2026-03-10T05:51:53.313 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: audit 2026-03-10T05:51:52.382081+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:53.313 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: audit 2026-03-10T05:51:52.387496+0000 mon.a (mon.0) 818 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:53.313 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: audit 2026-03-10T05:51:52.393615+0000 mon.a (mon.0) 819 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:53.313 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: audit 2026-03-10T05:51:52.398510+0000 mon.a (mon.0) 820 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:53.313 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: audit 2026-03-10T05:51:52.404189+0000 mon.a (mon.0) 821 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:53.313 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: audit 2026-03-10T05:51:52.418497+0000 mon.a (mon.0) 822 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:53.313 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: audit 2026-03-10T05:51:52.422735+0000 mon.a (mon.0) 823 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:53.313 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: audit 2026-03-10T05:51:52.426725+0000 mon.a (mon.0) 824 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:53.313 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: audit 2026-03-10T05:51:52.430487+0000 mon.a (mon.0) 825 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:53.313 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: audit 2026-03-10T05:51:52.432292+0000 mon.a (mon.0) 826 : audit [INF] from='mgr.24773 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm02.mxbwmh", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T05:51:53.313 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: cephadm 2026-03-10T05:51:52.433309+0000 mgr.x (mgr.24773) 36 : cephadm [INF] Reconfiguring iscsi.foo.vm02.mxbwmh (dependencies changed)...
2026-03-10T05:51:53.313 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: audit 2026-03-10T05:51:52.433635+0000 mon.b (mon.2) 59 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm02.mxbwmh", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T05:51:53.314 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: audit 2026-03-10T05:51:52.437460+0000 mon.b (mon.2) 60 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:51:53.314 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: cephadm 2026-03-10T05:51:52.438200+0000 mgr.x (mgr.24773) 37 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm02.mxbwmh on vm02
2026-03-10T05:51:53.314 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: cluster 2026-03-10T05:51:52.631103+0000 mgr.x (mgr.24773) 38 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T05:51:53.314 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: audit 2026-03-10T05:51:52.953550+0000 mon.a (mon.0) 827 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:53.314 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[17462]: audit 2026-03-10T05:51:52.961482+0000 mon.a (mon.0) 828 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:53.584 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[43400]: level=error ts=2026-03-10T05:51:53.524Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs"
2026-03-10T05:51:53.584 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[43400]: level=warn ts=2026-03-10T05:51:53.525Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T05:51:53.584 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:53 vm02 bash[43400]: level=warn ts=2026-03-10T05:51:53.526Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.102:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.102 because it doesn't contain any IP SANs"
2026-03-10T05:51:54.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:54 vm05 bash[17864]: audit 2026-03-10T05:51:52.822211+0000 mgr.x (mgr.24773) 39 : audit [DBG] from='client.14712 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:51:54.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:54 vm05 bash[17864]: cephadm 2026-03-10T05:51:52.965114+0000 mgr.x (mgr.24773) 40 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)...
2026-03-10T05:51:54.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:54 vm05 bash[17864]: cephadm 2026-03-10T05:51:52.970335+0000 mgr.x (mgr.24773) 41 : cephadm [INF] Deploying daemon alertmanager.a on vm02
2026-03-10T05:51:54.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:54 vm05 bash[17864]: audit 2026-03-10T05:51:53.515439+0000 mon.a (mon.0) 829 : audit [DBG] from='client.? 192.168.123.102:0/3843388591' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-10T05:51:54.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:54 vm05 bash[17864]: audit 2026-03-10T05:51:53.716535+0000 mon.a (mon.0) 830 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/2188069013"}]: dispatch
2026-03-10T05:51:54.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:54 vm05 bash[17864]: audit 2026-03-10T05:51:53.717749+0000 mon.b (mon.2) 61 : audit [INF] from='client.? 192.168.123.102:0/1311742161' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/2188069013"}]: dispatch
2026-03-10T05:51:54.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:54 vm05 bash[17864]: audit 2026-03-10T05:51:53.751687+0000 mon.b (mon.2) 62 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:51:55.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:55 vm05 bash[17864]: audit 2026-03-10T05:51:54.437854+0000 mon.a (mon.0) 831 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/2188069013"}]': finished
2026-03-10T05:51:55.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:55 vm05 bash[17864]: cluster 2026-03-10T05:51:54.438155+0000 mon.a (mon.0) 832 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in
2026-03-10T05:51:55.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:55 vm05 bash[17864]: cluster 2026-03-10T05:51:54.631428+0000 mgr.x (mgr.24773) 42 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 19 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T05:51:55.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:55 vm05 bash[17864]: audit 2026-03-10T05:51:54.635246+0000 mon.a (mon.0) 833 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/350459207"}]: dispatch
2026-03-10T05:51:55.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:55 vm05 bash[17864]: audit 2026-03-10T05:51:54.636522+0000 mon.b (mon.2) 63 : audit [INF] from='client.? 192.168.123.102:0/2133490795' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/350459207"}]: dispatch
2026-03-10T05:51:56.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:56 vm05 bash[17864]: audit 2026-03-10T05:51:55.464965+0000 mon.a (mon.0) 834 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/350459207"}]': finished
2026-03-10T05:51:56.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:56 vm05 bash[17864]: cluster 2026-03-10T05:51:55.465004+0000 mon.a (mon.0) 835 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in
2026-03-10T05:51:56.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:56 vm05 bash[17864]: audit 2026-03-10T05:51:55.697369+0000 mon.c (mon.1) 125 : audit [INF] from='client.? 192.168.123.102:0/1640553539' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3027066951"}]: dispatch
2026-03-10T05:51:56.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:56 vm05 bash[17864]: audit 2026-03-10T05:51:55.697758+0000 mon.a (mon.0) 836 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3027066951"}]: dispatch
2026-03-10T05:51:57.455 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:57 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:51:57.455 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:51:57 vm02 bash[17731]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:51:57] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T05:51:57.455 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:57 vm02 systemd[1]: Stopping Ceph alertmanager.a for 107483ae-1c44-11f1-b530-c1172cd6122a...
2026-03-10T05:51:57.455 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:57 vm02 bash[43400]: level=info ts=2026-03-10T05:51:57.227Z caller=main.go:557 msg="Received SIGTERM, exiting gracefully..."
2026-03-10T05:51:57.455 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:57 vm02 bash[51462]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-alertmanager-a
2026-03-10T05:51:57.455 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:57 vm02 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@alertmanager.a.service: Deactivated successfully.
2026-03-10T05:51:57.455 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:57 vm02 systemd[1]: Stopped Ceph alertmanager.a for 107483ae-1c44-11f1-b530-c1172cd6122a.
2026-03-10T05:51:57.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:57 vm02 bash[17462]: audit 2026-03-10T05:51:56.473484+0000 mon.a (mon.0) 837 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/3027066951"}]': finished
2026-03-10T05:51:57.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:57 vm02 bash[17462]: cluster 2026-03-10T05:51:56.473525+0000 mon.a (mon.0) 838 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in
2026-03-10T05:51:57.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:57 vm02 bash[17462]: cluster 2026-03-10T05:51:56.631690+0000 mgr.x (mgr.24773) 43 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s
2026-03-10T05:51:57.721 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:57 vm02 bash[17462]: audit 2026-03-10T05:51:56.700720+0000 mon.a (mon.0) 839 : audit [INF] from='client.? 192.168.123.102:0/2491145283' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/2274851303"}]: dispatch
2026-03-10T05:51:57.721 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:57 vm02 systemd[1]: Started Ceph alertmanager.a for 107483ae-1c44-11f1-b530-c1172cd6122a.
2026-03-10T05:51:58.085 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:57 vm02 bash[51578]: ts=2026-03-10T05:51:57.810Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
2026-03-10T05:51:58.085 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:57 vm02 bash[51578]: ts=2026-03-10T05:51:57.810Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
2026-03-10T05:51:58.085 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:57 vm02 bash[51578]: ts=2026-03-10T05:51:57.813Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.102 port=9094
2026-03-10T05:51:58.085 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:57 vm02 bash[51578]: ts=2026-03-10T05:51:57.816Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
2026-03-10T05:51:58.085 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:57 vm02 bash[51578]: ts=2026-03-10T05:51:57.835Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
2026-03-10T05:51:58.085 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:57 vm02 bash[51578]: ts=2026-03-10T05:51:57.836Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
2026-03-10T05:51:58.085 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:57 vm02 bash[51578]: ts=2026-03-10T05:51:57.837Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093
2026-03-10T05:51:58.085 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:57 vm02 bash[51578]: ts=2026-03-10T05:51:57.837Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9093
2026-03-10T05:51:58.400 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:51:58 vm02 systemd[1]: Stopping Ceph node-exporter.a for 107483ae-1c44-11f1-b530-c1172cd6122a...
2026-03-10T05:51:58.400 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:51:58 vm02 bash[51715]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-node-exporter-a
2026-03-10T05:51:58.400 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:51:58 vm02 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@node-exporter.a.service: Main process exited, code=exited, status=143/n/a
2026-03-10T05:51:58.400 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:51:58 vm02 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@node-exporter.a.service: Failed with result 'exit-code'.
2026-03-10T05:51:58.400 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:51:58 vm02 systemd[1]: Stopped Ceph node-exporter.a for 107483ae-1c44-11f1-b530-c1172cd6122a.
2026-03-10T05:51:58.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:58 vm05 bash[17864]: audit 2026-03-10T05:51:57.493162+0000 mon.a (mon.0) 840 : audit [INF] from='client.? 192.168.123.102:0/2491145283' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/2274851303"}]': finished
2026-03-10T05:51:58.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:58 vm05 bash[17864]: cluster 2026-03-10T05:51:57.493201+0000 mon.a (mon.0) 841 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in
2026-03-10T05:51:58.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:58 vm05 bash[17864]: audit 2026-03-10T05:51:57.580059+0000 mon.a (mon.0) 842 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:58.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:58 vm05 bash[17864]: audit 2026-03-10T05:51:57.591340+0000 mon.a (mon.0) 843 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:58.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:58 vm05 bash[17864]: cephadm 2026-03-10T05:51:57.595952+0000 mgr.x (mgr.24773) 44 : cephadm [INF] Reconfiguring node-exporter.a (dependencies changed)...
2026-03-10T05:51:58.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:58 vm05 bash[17864]: cephadm 2026-03-10T05:51:57.596376+0000 mgr.x (mgr.24773) 45 : cephadm [INF] Deploying daemon node-exporter.a on vm02
2026-03-10T05:51:58.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:58 vm05 bash[17864]: audit 2026-03-10T05:51:57.794318+0000 mon.c (mon.1) 126 : audit [INF] from='client.? 192.168.123.102:0/875591833' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1361057090"}]: dispatch
2026-03-10T05:51:58.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:58 vm05 bash[17864]: audit 2026-03-10T05:51:57.798320+0000 mon.a (mon.0) 844 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1361057090"}]: dispatch
2026-03-10T05:51:58.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:58 vm05 bash[17864]: audit 2026-03-10T05:51:58.426972+0000 mon.a (mon.0) 845 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:58.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:58 vm05 bash[17864]: audit 2026-03-10T05:51:58.433283+0000 mon.a (mon.0) 846 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:58.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:58 vm05 bash[17864]: audit 2026-03-10T05:51:58.458511+0000 mon.a (mon.0) 847 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:58.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:58 vm05 bash[17864]: audit 2026-03-10T05:51:58.464029+0000 mon.a (mon.0) 848 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:51:58.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:58 vm05 bash[17864]: audit 2026-03-10T05:51:58.469199+0000 mon.b (mon.2) 64 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-10T05:51:58.834 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:51:58 vm02 systemd[1]: Started Ceph node-exporter.a for 107483ae-1c44-11f1-b530-c1172cd6122a.
2026-03-10T05:51:58.834 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:51:58 vm02 bash[51824]: Unable to find image 'quay.io/prometheus/node-exporter:v1.7.0' locally
' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1361057090"}]: dispatch 2026-03-10T05:51:58.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:58 vm02 bash[17462]: audit 2026-03-10T05:51:58.426972+0000 mon.a (mon.0) 845 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:58.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:58 vm02 bash[17462]: audit 2026-03-10T05:51:58.433283+0000 mon.a (mon.0) 846 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:58.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:58 vm02 bash[17462]: audit 2026-03-10T05:51:58.458511+0000 mon.a (mon.0) 847 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:58.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:58 vm02 bash[17462]: audit 2026-03-10T05:51:58.464029+0000 mon.a (mon.0) 848 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:58.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:58 vm02 bash[17462]: audit 2026-03-10T05:51:58.469199+0000 mon.b (mon.2) 64 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T05:51:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:58 vm02 bash[22526]: audit 2026-03-10T05:51:57.493162+0000 mon.a (mon.0) 840 : audit [INF] from='client.? 192.168.123.102:0/2491145283' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/2274851303"}]': finished 2026-03-10T05:51:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:58 vm02 bash[22526]: cluster 2026-03-10T05:51:57.493201+0000 mon.a (mon.0) 841 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in 2026-03-10T05:51:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:58 vm02 bash[22526]: audit 2026-03-10T05:51:57.580059+0000 mon.a (mon.0) 842 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:58 vm02 bash[22526]: audit 2026-03-10T05:51:57.591340+0000 mon.a (mon.0) 843 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:58 vm02 bash[22526]: cephadm 2026-03-10T05:51:57.595952+0000 mgr.x (mgr.24773) 44 : cephadm [INF] Reconfiguring node-exporter.a (dependencies changed)... 2026-03-10T05:51:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:58 vm02 bash[22526]: cephadm 2026-03-10T05:51:57.596376+0000 mgr.x (mgr.24773) 45 : cephadm [INF] Deploying daemon node-exporter.a on vm02 2026-03-10T05:51:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:58 vm02 bash[22526]: audit 2026-03-10T05:51:57.794318+0000 mon.c (mon.1) 126 : audit [INF] from='client.? 192.168.123.102:0/875591833' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1361057090"}]: dispatch 2026-03-10T05:51:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:58 vm02 bash[22526]: audit 2026-03-10T05:51:57.798320+0000 mon.a (mon.0) 844 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1361057090"}]: dispatch 2026-03-10T05:51:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:58 vm02 bash[22526]: audit 2026-03-10T05:51:58.426972+0000 mon.a (mon.0) 845 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:58 vm02 bash[22526]: audit 2026-03-10T05:51:58.433283+0000 mon.a (mon.0) 846 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:58 vm02 bash[22526]: audit 2026-03-10T05:51:58.458511+0000 mon.a (mon.0) 847 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:58 vm02 bash[22526]: audit 2026-03-10T05:51:58.464029+0000 mon.a (mon.0) 848 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:58 vm02 bash[22526]: audit 2026-03-10T05:51:58.469199+0000 mon.b (mon.2) 64 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T05:51:59.001 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:58 vm05 systemd[1]: Stopping Ceph grafana.a for 107483ae-1c44-11f1-b530-c1172cd6122a... 2026-03-10T05:51:59.002 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:58 vm05 bash[39353]: Error response from daemon: No such container: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-grafana.a 2026-03-10T05:51:59.002 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:58 vm05 bash[33387]: t=2026-03-10T05:51:58+0000 lvl=info msg="Shutdown started" logger=server reason="System signal: terminated" 2026-03-10T05:51:59.258 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39361]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-grafana-a 2026-03-10T05:51:59.258 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39396]: Error response from daemon: No such container: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-grafana.a 2026-03-10T05:51:59.258 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@grafana.a.service: Deactivated successfully. 2026-03-10T05:51:59.258 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 systemd[1]: Stopped Ceph grafana.a for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T05:51:59.258 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 systemd[1]: Started Ceph grafana.a for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T05:51:59.258 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="The state of unified alerting is still not defined. The decision will be made during as we run the database migrations" logger=settings 2026-03-10T05:51:59.258 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=warn msg="falling back to legacy setting of 'min_interval_seconds'; please use the configuration option in the `unified_alerting` section if Grafana 8 alerts are enabled." 
logger=settings 2026-03-10T05:51:59.258 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="Config loaded from" logger=settings file=/usr/share/grafana/conf/defaults.ini 2026-03-10T05:51:59.258 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="Config loaded from" logger=settings file=/etc/grafana/grafana.ini 2026-03-10T05:51:59.258 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_DATA=/var/lib/grafana" 2026-03-10T05:51:59.258 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_LOGS=/var/log/grafana" 2026-03-10T05:51:59.258 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 2026-03-10T05:51:59.258 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 2026-03-10T05:51:59.258 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="Path Home" logger=settings path=/usr/share/grafana 2026-03-10T05:51:59.258 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="Path Data" logger=settings path=/var/lib/grafana 2026-03-10T05:51:59.258 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="Path Logs" logger=settings path=/var/log/grafana 2026-03-10T05:51:59.258 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="Path Plugins" logger=settings path=/var/lib/grafana/plugins 2026-03-10T05:51:59.258 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="Path Provisioning" logger=settings path=/etc/grafana/provisioning 2026-03-10T05:51:59.259 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="App mode production" logger=settings 2026-03-10T05:51:59.259 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="Connecting to DB" logger=sqlstore dbtype=sqlite3 2026-03-10T05:51:59.259 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=warn msg="SQLite database file has broader permissions than it should" logger=sqlstore path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r----- 2026-03-10T05:51:59.259 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="Starting DB migrations" logger=migrator 2026-03-10T05:51:59.259 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="migrations completed" logger=migrator performed=0 skipped=377 duration=455.668µs 2026-03-10T05:51:59.259 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 
05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="Created default organization" logger=sqlstore 2026-03-10T05:51:59.259 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="Initialising plugins" logger=plugin.manager 2026-03-10T05:51:59.259 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="Plugin registered" logger=plugin.manager pluginId=input 2026-03-10T05:51:59.533 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:59 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:59.533 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:51:59 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:59.533 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:51:59 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:59.533 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:59 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:59.533 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:51:59 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:59.533 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:51:59 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T05:51:59.533 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="Plugin registered" logger=plugin.manager pluginId=grafana-piechart-panel 2026-03-10T05:51:59.533 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="Plugin registered" logger=plugin.manager pluginId=vonage-status-panel 2026-03-10T05:51:59.533 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="Live Push Gateway initialization" logger=live.push_http 2026-03-10T05:51:59.533 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="deleted datasource based on configuration" logger=provisioning.datasources name=Dashboard1 2026-03-10T05:51:59.533 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="inserting datasource from configuration " logger=provisioning.datasources name=Dashboard1 uid=P43CA22E17D0F9596 2026-03-10T05:51:59.533 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="inserting datasource from configuration " logger=provisioning.datasources name=Loki uid=P8E80F9AEF21F6940 2026-03-10T05:51:59.533 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="HTTP Server Listen" logger=http.server address=[::]:3000 protocol=https subUrl= socket= 2026-03-10T05:51:59.533 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="warming cache for startup" logger=ngalert 2026-03-10T05:51:59.533 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 bash[39420]: t=2026-03-10T05:51:59+0000 lvl=info msg="starting MultiOrg Alertmanager" logger=ngalert.multiorg.alertmanager 2026-03-10T05:51:59.533 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:59.533 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:51:59 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:59.533 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:51:59 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
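The grafana.a startup above logs its settings sources in priority order: built-in defaults.ini, then /etc/grafana/grafana.ini, then GF_*-prefixed environment variables, with later sources overriding earlier ones. A rough sketch of that layering, assuming the simple GF_SECTION_KEY naming visible in the log (GF_PATHS_DATA -> [paths] data); Grafana's real parser handles multi-word keys differently, so this is a model rather than its implementation:

    # Sketch of layered settings resolution in the order grafana.a logs:
    # defaults.ini -> grafana.ini -> GF_SECTION_KEY environment variables.
    import configparser
    import os

    def resolve(defaults_ini: str, instance_ini: str) -> configparser.ConfigParser:
        cfg = configparser.ConfigParser()
        cfg.read([defaults_ini, instance_ini])  # later file wins per key
        for name, value in os.environ.items():
            parts = name.split("_", 2)          # e.g. GF_PATHS_DATA
            if len(parts) != 3 or parts[0] != "GF":
                continue
            section, key = parts[1].lower(), parts[2].lower()
            if not cfg.has_section(section):
                cfg.add_section(section)
            cfg.set(section, key, value)        # environment always wins
        return cfg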
2026-03-10T05:51:59.533 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:51:59 vm05 systemd[1]: Stopping Ceph node-exporter.b for 107483ae-1c44-11f1-b530-c1172cd6122a... 2026-03-10T05:51:59.844 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:59 vm05 bash[17864]: cephadm 2026-03-10T05:51:58.436517+0000 mgr.x (mgr.24773) 46 : cephadm [INF] Reconfiguring grafana.a (dependencies changed)... 2026-03-10T05:51:59.845 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:59 vm05 bash[17864]: cephadm 2026-03-10T05:51:58.440940+0000 mgr.x (mgr.24773) 47 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T05:51:59.845 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:59 vm05 bash[17864]: audit 2026-03-10T05:51:58.469502+0000 mgr.x (mgr.24773) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T05:51:59.845 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:59 vm05 bash[17864]: cephadm 2026-03-10T05:51:58.471783+0000 mgr.x (mgr.24773) 49 : cephadm [INF] Reconfiguring daemon grafana.a on vm05 2026-03-10T05:51:59.845 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:59 vm05 bash[17864]: audit 2026-03-10T05:51:58.595213+0000 mon.a (mon.0) 849 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1361057090"}]': finished 2026-03-10T05:51:59.845 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:59 vm05 bash[17864]: cluster 2026-03-10T05:51:58.595254+0000 mon.a (mon.0) 850 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-10T05:51:59.845 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:59 vm05 bash[17864]: cluster 2026-03-10T05:51:58.632357+0000 mgr.x (mgr.24773) 50 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-10T05:51:59.845 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:59 vm05 bash[17864]: audit 2026-03-10T05:51:58.802716+0000 mon.c (mon.1) 127 : audit [INF] from='client.? 192.168.123.102:0/3700669266' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3443698967"}]: dispatch 2026-03-10T05:51:59.845 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:59 vm05 bash[17864]: audit 2026-03-10T05:51:58.803067+0000 mon.a (mon.0) 851 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3443698967"}]: dispatch 2026-03-10T05:51:59.845 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:59 vm05 bash[17864]: audit 2026-03-10T05:51:59.046805+0000 mon.a (mon.0) 852 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:59.845 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:59 vm05 bash[17864]: audit 2026-03-10T05:51:59.053014+0000 mon.a (mon.0) 853 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:51:59.845 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:51:59 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
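The cephadm [INF] lines above show the mgr walking through dependent daemons (node-exporter, grafana, prometheus) and redeploying each one because its dependencies changed. A small polling sketch in the spirit of the test's own `ceph orch ps` wait loop; the JSON field names (daemon_name, status_desc) are assumptions about `ceph orch ps --format json` output rather than a pinned interface:

    # Sketch: wait for a cephadm reconfigure wave to settle by polling the
    # orchestrator, much as the test's shell loop does with `ceph orch ps`.
    import json
    import subprocess
    import time

    for _ in range(10):
        out = subprocess.run(
            ["ceph", "orch", "ps", "--format", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        pending = [d.get("daemon_name") for d in json.loads(out)
                   if d.get("status_desc") != "running"]
        if not pending:
            break
        print("waiting on:", ", ".join(pending))
        time.sleep(30)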
2026-03-10T05:51:59.845 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:51:59 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:59.845 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:51:59 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:59.845 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:51:59 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:59.845 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:51:59 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:59.845 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:51:59 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:59.845 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:51:59 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:59.845 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:51:59 vm05 bash[39541]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-node-exporter-b 2026-03-10T05:51:59.845 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:51:59 vm05 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@node-exporter.b.service: Main process exited, code=exited, status=143/n/a 2026-03-10T05:51:59.845 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:51:59 vm05 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@node-exporter.b.service: Failed with result 'exit-code'. 2026-03-10T05:51:59.845 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:51:59 vm05 systemd[1]: Stopped Ceph node-exporter.b for 107483ae-1c44-11f1-b530-c1172cd6122a. 
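The status=143 in the node-exporter stop sequences above, and the resulting "Failed with result 'exit-code'", is the conventional 128+15 encoding of death by SIGTERM: systemd stops the container wrapper, the runtime forwards the signal, and the shell-style exit status is the signal number plus 128 — expected noise during a redeploy, not a crash. A quick demonstration of the encoding:

    # Why a SIGTERM'd service logs status=143: shells and container
    # runtimes report death-by-signal as 128 + signal number.
    import signal
    import subprocess

    out = subprocess.run(
        ["sh", "-c", "sh -c 'kill -TERM $$'; echo $?"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())    # -> 143
    print(128 + signal.SIGTERM)  # -> 143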
2026-03-10T05:51:59.845 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:51:59 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:51:59.845 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:51:59 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:00.084 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:51:59 vm02 bash[51578]: ts=2026-03-10T05:51:59.816Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000202358s 2026-03-10T05:52:00.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:59 vm02 bash[17462]: cephadm 2026-03-10T05:51:58.436517+0000 mgr.x (mgr.24773) 46 : cephadm [INF] Reconfiguring grafana.a (dependencies changed)... 2026-03-10T05:52:00.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:59 vm02 bash[17462]: cephadm 2026-03-10T05:51:58.440940+0000 mgr.x (mgr.24773) 47 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T05:52:00.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:59 vm02 bash[17462]: audit 2026-03-10T05:51:58.469502+0000 mgr.x (mgr.24773) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T05:52:00.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:59 vm02 bash[17462]: cephadm 2026-03-10T05:51:58.471783+0000 mgr.x (mgr.24773) 49 : cephadm [INF] Reconfiguring daemon grafana.a on vm05 2026-03-10T05:52:00.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:59 vm02 bash[17462]: audit 2026-03-10T05:51:58.595213+0000 mon.a (mon.0) 849 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1361057090"}]': finished 2026-03-10T05:52:00.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:59 vm02 bash[17462]: cluster 2026-03-10T05:51:58.595254+0000 mon.a (mon.0) 850 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-10T05:52:00.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:59 vm02 bash[17462]: cluster 2026-03-10T05:51:58.632357+0000 mgr.x (mgr.24773) 50 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-10T05:52:00.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:59 vm02 bash[17462]: audit 2026-03-10T05:51:58.802716+0000 mon.c (mon.1) 127 : audit [INF] from='client.? 192.168.123.102:0/3700669266' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3443698967"}]: dispatch 2026-03-10T05:52:00.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:59 vm02 bash[17462]: audit 2026-03-10T05:51:58.803067+0000 mon.a (mon.0) 851 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3443698967"}]: dispatch 2026-03-10T05:52:00.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:59 vm02 bash[17462]: audit 2026-03-10T05:51:59.046805+0000 mon.a (mon.0) 852 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:00.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:51:59 vm02 bash[17462]: audit 2026-03-10T05:51:59.053014+0000 mon.a (mon.0) 853 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:00.084 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:51:59 vm02 bash[51824]: v1.7.0: Pulling from prometheus/node-exporter 2026-03-10T05:52:00.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:59 vm02 bash[22526]: cephadm 2026-03-10T05:51:58.436517+0000 mgr.x (mgr.24773) 46 : cephadm [INF] Reconfiguring grafana.a (dependencies changed)... 2026-03-10T05:52:00.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:59 vm02 bash[22526]: cephadm 2026-03-10T05:51:58.440940+0000 mgr.x (mgr.24773) 47 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T05:52:00.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:59 vm02 bash[22526]: audit 2026-03-10T05:51:58.469502+0000 mgr.x (mgr.24773) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T05:52:00.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:59 vm02 bash[22526]: cephadm 2026-03-10T05:51:58.471783+0000 mgr.x (mgr.24773) 49 : cephadm [INF] Reconfiguring daemon grafana.a on vm05 2026-03-10T05:52:00.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:59 vm02 bash[22526]: audit 2026-03-10T05:51:58.595213+0000 mon.a (mon.0) 849 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1361057090"}]': finished 2026-03-10T05:52:00.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:59 vm02 bash[22526]: cluster 2026-03-10T05:51:58.595254+0000 mon.a (mon.0) 850 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-10T05:52:00.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:59 vm02 bash[22526]: cluster 2026-03-10T05:51:58.632357+0000 mgr.x (mgr.24773) 50 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-10T05:52:00.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:59 vm02 bash[22526]: audit 2026-03-10T05:51:58.802716+0000 mon.c (mon.1) 127 : audit [INF] from='client.? 192.168.123.102:0/3700669266' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3443698967"}]: dispatch 2026-03-10T05:52:00.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:59 vm02 bash[22526]: audit 2026-03-10T05:51:58.803067+0000 mon.a (mon.0) 851 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3443698967"}]: dispatch 2026-03-10T05:52:00.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:59 vm02 bash[22526]: audit 2026-03-10T05:51:59.046805+0000 mon.a (mon.0) 852 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:00.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:51:59 vm02 bash[22526]: audit 2026-03-10T05:51:59.053014+0000 mon.a (mon.0) 853 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:00.251 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:51:59 vm05 systemd[1]: Started Ceph node-exporter.b for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T05:52:00.251 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:51:59 vm05 bash[39655]: Unable to find image 'quay.io/prometheus/node-exporter:v1.7.0' locally 2026-03-10T05:52:00.584 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:00 vm02 bash[51824]: 2abcce694348: Pulling fs layer 2026-03-10T05:52:00.584 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:00 vm02 bash[51824]: 455fd88e5221: Pulling fs layer 2026-03-10T05:52:00.584 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:00 vm02 bash[51824]: 324153f2810a: Pulling fs layer 2026-03-10T05:52:00.884 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:00 vm02 bash[17462]: cephadm 2026-03-10T05:51:59.057930+0000 mgr.x (mgr.24773) 51 : cephadm [INF] Reconfiguring node-exporter.b (dependencies changed)... 2026-03-10T05:52:00.884 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:00 vm02 bash[17462]: cephadm 2026-03-10T05:51:59.058234+0000 mgr.x (mgr.24773) 52 : cephadm [INF] Deploying daemon node-exporter.b on vm05 2026-03-10T05:52:00.885 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:00 vm02 bash[17462]: audit 2026-03-10T05:51:59.612300+0000 mon.a (mon.0) 854 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3443698967"}]': finished 2026-03-10T05:52:00.885 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:00 vm02 bash[17462]: cluster 2026-03-10T05:51:59.612395+0000 mon.a (mon.0) 855 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-10T05:52:00.885 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:00 vm02 bash[17462]: audit 2026-03-10T05:51:59.834179+0000 mon.c (mon.1) 128 : audit [INF] from='client.? 192.168.123.102:0/314740819' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3443698967"}]: dispatch 2026-03-10T05:52:00.885 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:00 vm02 bash[17462]: audit 2026-03-10T05:51:59.834487+0000 mon.a (mon.0) 856 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3443698967"}]: dispatch 2026-03-10T05:52:00.885 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:00 vm02 bash[17462]: audit 2026-03-10T05:51:59.880597+0000 mon.a (mon.0) 857 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:00.885 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:00 vm02 bash[17462]: audit 2026-03-10T05:51:59.890855+0000 mon.a (mon.0) 858 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:00.885 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:00 vm02 bash[51824]: 2abcce694348: Verifying Checksum 2026-03-10T05:52:00.885 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:00 vm02 bash[51824]: 2abcce694348: Download complete 2026-03-10T05:52:00.885 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:00 vm02 bash[51824]: 455fd88e5221: Verifying Checksum 2026-03-10T05:52:00.885 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:00 vm02 bash[51824]: 455fd88e5221: Download complete 2026-03-10T05:52:00.885 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:00 vm02 bash[51824]: 2abcce694348: Pull complete 2026-03-10T05:52:00.885 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:00 vm02 bash[51824]: 455fd88e5221: Pull complete 2026-03-10T05:52:00.885 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:00 vm02 bash[22526]: cephadm 2026-03-10T05:51:59.057930+0000 mgr.x (mgr.24773) 51 : cephadm [INF] Reconfiguring node-exporter.b (dependencies changed)... 2026-03-10T05:52:00.885 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:00 vm02 bash[22526]: cephadm 2026-03-10T05:51:59.058234+0000 mgr.x (mgr.24773) 52 : cephadm [INF] Deploying daemon node-exporter.b on vm05 2026-03-10T05:52:00.885 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:00 vm02 bash[22526]: audit 2026-03-10T05:51:59.612300+0000 mon.a (mon.0) 854 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3443698967"}]': finished 2026-03-10T05:52:00.885 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:00 vm02 bash[22526]: cluster 2026-03-10T05:51:59.612395+0000 mon.a (mon.0) 855 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-10T05:52:00.885 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:00 vm02 bash[22526]: audit 2026-03-10T05:51:59.834179+0000 mon.c (mon.1) 128 : audit [INF] from='client.? 192.168.123.102:0/314740819' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3443698967"}]: dispatch 2026-03-10T05:52:00.885 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:00 vm02 bash[22526]: audit 2026-03-10T05:51:59.834487+0000 mon.a (mon.0) 856 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3443698967"}]: dispatch 2026-03-10T05:52:00.885 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:00 vm02 bash[22526]: audit 2026-03-10T05:51:59.880597+0000 mon.a (mon.0) 857 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:00.885 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:00 vm02 bash[22526]: audit 2026-03-10T05:51:59.890855+0000 mon.a (mon.0) 858 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:01.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:00 vm05 bash[17864]: cephadm 2026-03-10T05:51:59.057930+0000 mgr.x (mgr.24773) 51 : cephadm [INF] Reconfiguring node-exporter.b (dependencies changed)... 2026-03-10T05:52:01.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:00 vm05 bash[17864]: cephadm 2026-03-10T05:51:59.058234+0000 mgr.x (mgr.24773) 52 : cephadm [INF] Deploying daemon node-exporter.b on vm05 2026-03-10T05:52:01.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:00 vm05 bash[17864]: audit 2026-03-10T05:51:59.612300+0000 mon.a (mon.0) 854 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/3443698967"}]': finished 2026-03-10T05:52:01.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:00 vm05 bash[17864]: cluster 2026-03-10T05:51:59.612395+0000 mon.a (mon.0) 855 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-10T05:52:01.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:00 vm05 bash[17864]: audit 2026-03-10T05:51:59.834179+0000 mon.c (mon.1) 128 : audit [INF] from='client.? 192.168.123.102:0/314740819' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3443698967"}]: dispatch 2026-03-10T05:52:01.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:00 vm05 bash[17864]: audit 2026-03-10T05:51:59.834487+0000 mon.a (mon.0) 856 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3443698967"}]: dispatch 2026-03-10T05:52:01.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:00 vm05 bash[17864]: audit 2026-03-10T05:51:59.880597+0000 mon.a (mon.0) 857 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:01.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:00 vm05 bash[17864]: audit 2026-03-10T05:51:59.890855+0000 mon.a (mon.0) 858 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:00 vm02 bash[51824]: 324153f2810a: Verifying Checksum 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:00 vm02 bash[51824]: 324153f2810a: Download complete 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:00 vm02 bash[51824]: 324153f2810a: Pull complete 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:00 vm02 bash[51824]: Digest: sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:00 vm02 bash[51824]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.7.0 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.118Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)" 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.119Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)" 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.119Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.119Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:110 level=info msg="Enabled collectors" 
2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=arp 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=bcache 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=bonding 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=btrfs 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=conntrack 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=cpu 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=cpufreq 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=diskstats 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=dmi 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=edac 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=entropy 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=fibrechannel 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=filefd 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=filesystem 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=hwmon 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=infiniband 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=ipvs 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=loadavg 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: 
ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=mdadm 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=meminfo 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=netclass 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=netdev 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=netstat 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=nfs 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=nfsd 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=nvme 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=os 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=powersupplyclass 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=pressure 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=rapl 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=schedstat 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=selinux 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=sockstat 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=softnet 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=stat 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=tapestats 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=textfile 
2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=thermal_zone 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=time 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=udp_queues 2026-03-10T05:52:01.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=uname 2026-03-10T05:52:01.336 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=vmstat 2026-03-10T05:52:01.336 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=xfs 2026-03-10T05:52:01.336 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.120Z caller=node_exporter.go:117 level=info collector=zfs 2026-03-10T05:52:01.336 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.122Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100 2026-03-10T05:52:01.336 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[51824]: ts=2026-03-10T05:52:01.122Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100 2026-03-10T05:52:01.501 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:01 vm05 bash[39655]: v1.7.0: Pulling from prometheus/node-exporter 2026-03-10T05:52:02.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:01 vm05 bash[17864]: cephadm 2026-03-10T05:51:59.897859+0000 mgr.x (mgr.24773) 53 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-10T05:52:02.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:01 vm05 bash[17864]: cephadm 2026-03-10T05:52:00.057949+0000 mgr.x (mgr.24773) 54 : cephadm [INF] Deploying daemon prometheus.a on vm05 2026-03-10T05:52:02.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:01 vm05 bash[17864]: audit 2026-03-10T05:52:00.612992+0000 mon.a (mon.0) 859 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3443698967"}]': finished 2026-03-10T05:52:02.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:01 vm05 bash[17864]: cluster 2026-03-10T05:52:00.613170+0000 mon.a (mon.0) 860 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-10T05:52:02.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:01 vm05 bash[17864]: cluster 2026-03-10T05:52:00.632726+0000 mgr.x (mgr.24773) 55 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 101 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:52:02.002 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:01 vm05 bash[39655]: 2abcce694348: Pulling fs layer 2026-03-10T05:52:02.002 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:01 vm05 bash[39655]: 455fd88e5221: Pulling fs layer 2026-03-10T05:52:02.002 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:01 vm05 bash[39655]: 324153f2810a: Pulling fs layer 2026-03-10T05:52:02.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:01 vm02 bash[22526]: cephadm 2026-03-10T05:51:59.897859+0000 mgr.x (mgr.24773) 53 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-10T05:52:02.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:01 vm02 bash[22526]: cephadm 2026-03-10T05:52:00.057949+0000 mgr.x (mgr.24773) 54 : cephadm [INF] Deploying daemon prometheus.a on vm05 2026-03-10T05:52:02.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:01 vm02 bash[22526]: audit 2026-03-10T05:52:00.612992+0000 mon.a (mon.0) 859 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3443698967"}]': finished 2026-03-10T05:52:02.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:01 vm02 bash[22526]: cluster 2026-03-10T05:52:00.613170+0000 mon.a (mon.0) 860 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-10T05:52:02.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:01 vm02 bash[22526]: cluster 2026-03-10T05:52:00.632726+0000 mgr.x (mgr.24773) 55 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 101 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:52:02.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[17462]: cephadm 2026-03-10T05:51:59.897859+0000 mgr.x (mgr.24773) 53 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-10T05:52:02.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[17462]: cephadm 2026-03-10T05:52:00.057949+0000 mgr.x (mgr.24773) 54 : cephadm [INF] Deploying daemon prometheus.a on vm05 2026-03-10T05:52:02.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[17462]: audit 2026-03-10T05:52:00.612992+0000 mon.a (mon.0) 859 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/3443698967"}]': finished 2026-03-10T05:52:02.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[17462]: cluster 2026-03-10T05:52:00.613170+0000 mon.a (mon.0) 860 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-10T05:52:02.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:01 vm02 bash[17462]: cluster 2026-03-10T05:52:00.632726+0000 mgr.x (mgr.24773) 55 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 101 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:52:02.403 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:02 vm05 bash[33954]: ts=2026-03-10T05:52:02.314Z caller=manager.go:609 level=warn component="rule manager" group=pools msg="Evaluating rule failed" rule="alert: CephPoolGrowthWarning\nexpr: (predict_linear(ceph_pool_percent_used[2d], 3600 * 24 * 5) * on(pool_id) group_right()\n ceph_pool_metadata) >= 95\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.9.2\n severity: warning\n type: ceph_default\nannotations:\n description: |\n Pool '{{ $labels.name }}' will be full in less than 5 days assuming the average fill-up rate of the past 48 hours.\n summary: Pool growth rate may soon exceed it's capacity\n" err="found duplicate series for the match group {pool_id=\"1\"} on the left hand-side of the operation: [{instance=\"192.168.123.105:9283\", job=\"ceph\", pool_id=\"1\"}, {instance=\"192.168.123.102:9283\", job=\"ceph\", pool_id=\"1\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T05:52:02.403 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: 455fd88e5221: Verifying Checksum 2026-03-10T05:52:02.403 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: 455fd88e5221: Download complete 2026-03-10T05:52:02.403 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: 2abcce694348: Download complete 2026-03-10T05:52:02.403 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: 2abcce694348: Pull complete 2026-03-10T05:52:02.403 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: 455fd88e5221: Pull complete 2026-03-10T05:52:02.403 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: 324153f2810a: Verifying Checksum 2026-03-10T05:52:02.403 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: 324153f2810a: Download complete 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: 324153f2810a: Pull complete 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: Digest: sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.7.0 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.535Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)" 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.535Z 
caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)" 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.536Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.536Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:110 level=info msg="Enabled collectors" 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=arp 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=bcache 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=bonding 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=btrfs 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=conntrack 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=cpu 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=cpufreq 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=diskstats 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=dmi 2026-03-10T05:52:02.752 
INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=edac 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=entropy 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=fibrechannel 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=filefd 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=filesystem 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=hwmon 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=infiniband 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=ipvs 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=loadavg 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=mdadm 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=meminfo 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=netclass 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=netdev 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=netstat 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=nfs 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=nfsd 2026-03-10T05:52:02.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=nvme 2026-03-10T05:52:02.753 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=os 2026-03-10T05:52:02.753 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z 
caller=node_exporter.go:117 level=info collector=powersupplyclass 2026-03-10T05:52:02.753 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=pressure 2026-03-10T05:52:02.753 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=rapl 2026-03-10T05:52:02.753 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=schedstat 2026-03-10T05:52:02.753 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=selinux 2026-03-10T05:52:02.753 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=sockstat 2026-03-10T05:52:02.753 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=softnet 2026-03-10T05:52:02.753 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=stat 2026-03-10T05:52:02.753 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=tapestats 2026-03-10T05:52:02.753 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=textfile 2026-03-10T05:52:02.753 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=thermal_zone 2026-03-10T05:52:02.753 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=time 2026-03-10T05:52:02.753 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=udp_queues 2026-03-10T05:52:02.753 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=uname 2026-03-10T05:52:02.753 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=vmstat 2026-03-10T05:52:02.753 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=xfs 2026-03-10T05:52:02.753 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.537Z caller=node_exporter.go:117 level=info collector=zfs 2026-03-10T05:52:02.753 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.539Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100 2026-03-10T05:52:02.753 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:02 vm05 bash[39655]: ts=2026-03-10T05:52:02.539Z caller=tls_config.go:277 level=info msg="TLS is disabled." 
http2=false address=[::]:9100 2026-03-10T05:52:03.252 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:02 vm05 bash[37598]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:52:02] "GET /metrics HTTP/1.1" 200 37768 "" "Prometheus/2.33.4" 2026-03-10T05:52:04.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:03 vm05 bash[17864]: cluster 2026-03-10T05:52:02.633078+0000 mgr.x (mgr.24773) 56 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 101 MiB used, 160 GiB / 160 GiB avail; 994 B/s rd, 0 op/s 2026-03-10T05:52:04.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:03 vm02 bash[17462]: cluster 2026-03-10T05:52:02.633078+0000 mgr.x (mgr.24773) 56 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 101 MiB used, 160 GiB / 160 GiB avail; 994 B/s rd, 0 op/s 2026-03-10T05:52:04.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:03 vm02 bash[22526]: cluster 2026-03-10T05:52:02.633078+0000 mgr.x (mgr.24773) 56 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 101 MiB used, 160 GiB / 160 GiB avail; 994 B/s rd, 0 op/s 2026-03-10T05:52:05.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:04 vm02 bash[17462]: audit 2026-03-10T05:52:03.332721+0000 mgr.x (mgr.24773) 57 : audit [DBG] from='client.24940 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:52:05.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:04 vm02 bash[22526]: audit 2026-03-10T05:52:03.332721+0000 mgr.x (mgr.24773) 57 : audit [DBG] from='client.24940 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:52:05.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:04 vm05 bash[17864]: audit 2026-03-10T05:52:03.332721+0000 mgr.x (mgr.24773) 57 : audit [DBG] from='client.24940 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:52:05.897 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:05 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:05.897 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:05 vm05 bash[17864]: cluster 2026-03-10T05:52:04.633455+0000 mgr.x (mgr.24773) 58 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 101 MiB used, 160 GiB / 160 GiB avail; 848 B/s rd, 0 op/s 2026-03-10T05:52:05.897 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:05 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:05.898 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:52:05 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:05.898 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:52:05 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:05.898 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:52:05 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:05.898 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:52:05 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:05.898 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:05 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:05.898 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:05 vm05 systemd[1]: Stopping Ceph prometheus.a for 107483ae-1c44-11f1-b530-c1172cd6122a... 2026-03-10T05:52:05.898 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:05 vm05 bash[33954]: ts=2026-03-10T05:52:05.725Z caller=main.go:775 level=warn msg="Received SIGTERM, exiting gracefully..." 2026-03-10T05:52:05.898 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:05 vm05 bash[33954]: ts=2026-03-10T05:52:05.725Z caller=main.go:798 level=info msg="Stopping scrape discovery manager..." 2026-03-10T05:52:05.898 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:05 vm05 bash[33954]: ts=2026-03-10T05:52:05.725Z caller=main.go:812 level=info msg="Stopping notify discovery manager..." 2026-03-10T05:52:05.898 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:05 vm05 bash[33954]: ts=2026-03-10T05:52:05.725Z caller=main.go:834 level=info msg="Stopping scrape manager..." 2026-03-10T05:52:05.898 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:05 vm05 bash[33954]: ts=2026-03-10T05:52:05.725Z caller=main.go:794 level=info msg="Scrape discovery manager stopped" 2026-03-10T05:52:05.898 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:05 vm05 bash[33954]: ts=2026-03-10T05:52:05.725Z caller=main.go:808 level=info msg="Notify discovery manager stopped" 2026-03-10T05:52:05.898 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:05 vm05 bash[33954]: ts=2026-03-10T05:52:05.725Z caller=manager.go:945 level=info component="rule manager" msg="Stopping rule manager..." 
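The run of systemd complaints above all trace back to the per-cluster unit template cephadm installs (ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service), which still ships with KillMode=none, so every daemon instantiated from it triggers the same warning. The warnings are benign for this test; on a box where they are noisy, a systemd drop-in is one way to quiet them. This is a sketch only: cephadm owns the template and may rewrite it on the next redeploy.

# Sketch: override KillMode for all instances of the cephadm unit template.
# Unit name taken from the log; cephadm may regenerate the template later.
sudo mkdir -p /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service.d
printf '[Service]\nKillMode=mixed\n' | sudo tee /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service.d/10-killmode.conf
sudo systemctl daemon-reload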
2026-03-10T05:52:05.898 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:05 vm05 bash[33954]: ts=2026-03-10T05:52:05.725Z caller=manager.go:955 level=info component="rule manager" msg="Rule manager stopped" 2026-03-10T05:52:05.898 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:05 vm05 bash[33954]: ts=2026-03-10T05:52:05.725Z caller=main.go:828 level=info msg="Scrape manager stopped" 2026-03-10T05:52:05.898 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:05 vm05 bash[33954]: ts=2026-03-10T05:52:05.726Z caller=notifier.go:600 level=info component=notifier msg="Stopping notification manager..." 2026-03-10T05:52:05.898 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:05 vm05 bash[33954]: ts=2026-03-10T05:52:05.726Z caller=main.go:1054 level=info msg="Notifier manager stopped" 2026-03-10T05:52:05.898 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:05 vm05 bash[33954]: ts=2026-03-10T05:52:05.726Z caller=main.go:1066 level=info msg="See you next time!" 2026-03-10T05:52:05.898 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:05 vm05 bash[39986]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-prometheus-a 2026-03-10T05:52:05.898 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:05 vm05 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@prometheus.a.service: Deactivated successfully. 2026-03-10T05:52:05.898 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:05 vm05 systemd[1]: Stopped Ceph prometheus.a for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T05:52:05.898 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:52:05 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:05.899 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:05 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:06.023 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:05 vm02 bash[17462]: cluster 2026-03-10T05:52:04.633455+0000 mgr.x (mgr.24773) 58 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 101 MiB used, 160 GiB / 160 GiB avail; 848 B/s rd, 0 op/s 2026-03-10T05:52:06.024 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:05 vm02 bash[22526]: cluster 2026-03-10T05:52:04.633455+0000 mgr.x (mgr.24773) 58 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 101 MiB used, 160 GiB / 160 GiB avail; 848 B/s rd, 0 op/s 2026-03-10T05:52:06.147 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:05 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T05:52:06.148 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:05 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:06.148 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:06 vm05 bash[37598]: [10/Mar/2026:05:52:06] ENGINE Bus STOPPING 2026-03-10T05:52:06.148 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:52:05 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:06.148 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:52:05 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:06.148 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:52:05 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:06.148 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:52:05 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:06.148 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:05 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:06.148 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:05 vm05 systemd[1]: Started Ceph prometheus.a for 107483ae-1c44-11f1-b530-c1172cd6122a. 
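systemd brings the reconfigured prometheus.a straight back up. In a live session the redeploy is easy to confirm from the orchestrator side, for example:

# Confirm the daemon returned and is reported running (standard orch CLI):
ceph orch ps --daemon-type prometheus --refresh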
2026-03-10T05:52:06.148 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:06 vm05 bash[40098]: ts=2026-03-10T05:52:06.148Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)" 2026-03-10T05:52:06.148 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:06 vm05 bash[40098]: ts=2026-03-10T05:52:06.148Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)" 2026-03-10T05:52:06.148 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:05 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:06.148 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:52:05 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:06.501 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:06 vm05 bash[37598]: [10/Mar/2026:05:52:06] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T05:52:06.502 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:06 vm05 bash[37598]: [10/Mar/2026:05:52:06] ENGINE Bus STOPPED 2026-03-10T05:52:06.502 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:06 vm05 bash[37598]: [10/Mar/2026:05:52:06] ENGINE Bus STARTING 2026-03-10T05:52:06.502 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:06 vm05 bash[37598]: [10/Mar/2026:05:52:06] ENGINE Serving on http://:::9283 2026-03-10T05:52:06.502 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:06 vm05 bash[37598]: [10/Mar/2026:05:52:06] ENGINE Bus STARTED 2026-03-10T05:52:06.502 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:06 vm05 bash[37598]: [10/Mar/2026:05:52:06] ENGINE Bus STOPPING 2026-03-10T05:52:06.502 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:06 vm05 bash[40098]: ts=2026-03-10T05:52:06.148Z caller=main.go:623 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm05 (none))" 2026-03-10T05:52:06.502 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:06 vm05 bash[40098]: ts=2026-03-10T05:52:06.148Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-10T05:52:06.502 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:06 vm05 bash[40098]: ts=2026-03-10T05:52:06.148Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-10T05:52:06.502 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:06 vm05 bash[40098]: ts=2026-03-10T05:52:06.151Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095 2026-03-10T05:52:06.502 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:06 vm05 bash[40098]: ts=2026-03-10T05:52:06.152Z caller=main.go:1129 level=info msg="Starting TSDB ..." 
2026-03-10T05:52:06.502 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:06 vm05 bash[40098]: ts=2026-03-10T05:52:06.153Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095 2026-03-10T05:52:06.502 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:06 vm05 bash[40098]: ts=2026-03-10T05:52:06.153Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9095 2026-03-10T05:52:06.502 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:06 vm05 bash[40098]: ts=2026-03-10T05:52:06.155Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-10T05:52:06.502 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:06 vm05 bash[40098]: ts=2026-03-10T05:52:06.155Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.452µs 2026-03-10T05:52:06.502 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:06 vm05 bash[40098]: ts=2026-03-10T05:52:06.155Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-10T05:52:06.502 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:06 vm05 bash[40098]: ts=2026-03-10T05:52:06.163Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=2 2026-03-10T05:52:06.502 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:06 vm05 bash[40098]: ts=2026-03-10T05:52:06.170Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=2 2026-03-10T05:52:06.502 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:06 vm05 bash[40098]: ts=2026-03-10T05:52:06.170Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=2 maxSegment=2 2026-03-10T05:52:06.502 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:06 vm05 bash[40098]: ts=2026-03-10T05:52:06.171Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=77.537µs wal_replay_duration=15.618577ms wbl_replay_duration=120ns total_replay_duration=15.715129ms 2026-03-10T05:52:06.502 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:06 vm05 bash[40098]: ts=2026-03-10T05:52:06.175Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 2026-03-10T05:52:06.502 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:06 vm05 bash[40098]: ts=2026-03-10T05:52:06.175Z caller=main.go:1153 level=info msg="TSDB started" 2026-03-10T05:52:06.502 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:06 vm05 bash[40098]: ts=2026-03-10T05:52:06.175Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-10T05:52:06.502 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:06 vm05 bash[40098]: ts=2026-03-10T05:52:06.185Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=10.530731ms db_storage=682ns remote_storage=953ns web_handler=280ns query_engine=541ns scrape=716.029µs scrape_sd=105.879µs notify=9.077µs notify_sd=12.344µs rules=9.084848ms tracing=5.851µs 2026-03-10T05:52:06.502 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:06 vm05 bash[40098]: ts=2026-03-10T05:52:06.185Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 
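With the server ready again, the earlier "found duplicate series ... many-to-many matching not allowed" failure of CephPoolGrowthWarning is worth a second look: both the active and standby mgr exporters were being scraped, so predict_linear() yielded two series per pool_id on the left of the on(pool_id) join. The reconfiguration that just landed should narrow scraping to the active mgr; an alternative, purely illustrative hardening is to aggregate the instance label away before joining. A sketch against the endpoint the dashboard was just pointed at (http://vm05.local:9095):

curl -s http://vm05.local:9095/-/ready   # Prometheus readiness probe
# De-duplicated variant of the pool-growth expression (illustrative only):
curl -s 'http://vm05.local:9095/api/v1/query' --data-urlencode \
  'query=(max by (pool_id) (predict_linear(ceph_pool_percent_used[2d], 3600 * 24 * 5)) * on(pool_id) group_right() ceph_pool_metadata) >= 95'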
2026-03-10T05:52:06.502 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:06 vm05 bash[40098]: ts=2026-03-10T05:52:06.185Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:06 vm05 bash[37598]: [10/Mar/2026:05:52:06] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:06 vm05 bash[37598]: [10/Mar/2026:05:52:06] ENGINE Bus STOPPED 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:06 vm05 bash[37598]: [10/Mar/2026:05:52:06] ENGINE Bus STARTING 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:06 vm05 bash[37598]: [10/Mar/2026:05:52:06] ENGINE Serving on http://:::9283 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:06 vm05 bash[37598]: [10/Mar/2026:05:52:06] ENGINE Bus STARTED 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:06 vm05 bash[37598]: [10/Mar/2026:05:52:06] ENGINE Bus STOPPING 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: audit 2026-03-10T05:52:06.009479+0000 mon.a (mon.0) 861 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: audit 2026-03-10T05:52:06.016477+0000 mon.a (mon.0) 862 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: audit 2026-03-10T05:52:06.020654+0000 mon.b (mon.2) 65 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: audit 2026-03-10T05:52:06.021093+0000 mgr.x (mgr.24773) 59 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: audit 2026-03-10T05:52:06.029185+0000 mon.a (mon.0) 863 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: cephadm 2026-03-10T05:52:06.034472+0000 mgr.x (mgr.24773) 60 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.102:5000 to Dashboard 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: audit 2026-03-10T05:52:06.034663+0000 mon.b (mon.2) 66 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: audit 2026-03-10T05:52:06.035036+0000 mgr.x (mgr.24773) 61 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: audit 2026-03-10T05:52:06.036608+0000 mon.b (mon.2) 67 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: audit 2026-03-10T05:52:06.036879+0000 mgr.x (mgr.24773) 62 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: audit 2026-03-10T05:52:06.039920+0000 mon.a (mon.0) 864 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: audit 2026-03-10T05:52:06.051083+0000 mon.b (mon.2) 68 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: audit 2026-03-10T05:52:06.053716+0000 mgr.x (mgr.24773) 63 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: audit 2026-03-10T05:52:06.055086+0000 mon.b (mon.2) 69 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm02.local:9093"}]: dispatch 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: audit 2026-03-10T05:52:06.055313+0000 mgr.x (mgr.24773) 64 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm02.local:9093"}]: dispatch 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: audit 2026-03-10T05:52:06.060189+0000 mon.a (mon.0) 865 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: audit 2026-03-10T05:52:06.071477+0000 mon.b (mon.2) 70 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: audit 2026-03-10T05:52:06.071803+0000 mgr.x (mgr.24773) 65 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: audit 2026-03-10T05:52:06.072440+0000 mon.b (mon.2) 71 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm05.local:3000"}]: dispatch 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: audit 2026-03-10T05:52:06.072629+0000 mgr.x (mgr.24773) 66 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm05.local:3000"}]: dispatch 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: audit 2026-03-10T05:52:06.075836+0000 mon.a (mon.0) 866 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: audit 2026-03-10T05:52:06.087398+0000 mon.b (mon.2) 72 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: audit 2026-03-10T05:52:06.088138+0000 mgr.x (mgr.24773) 67 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: audit 2026-03-10T05:52:06.089813+0000 mon.b (mon.2) 73 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm05.local:9095"}]: dispatch 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: audit 2026-03-10T05:52:06.090056+0000 mgr.x (mgr.24773) 68 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm05.local:9095"}]: dispatch 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: audit 2026-03-10T05:52:06.094072+0000 mon.a (mon.0) 867 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: audit 2026-03-10T05:52:06.140157+0000 mon.b (mon.2) 74 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: cephadm 2026-03-10T05:52:06.141134+0000 mgr.x (mgr.24773) 69 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: cephadm 2026-03-10T05:52:06.141469+0000 mgr.x (mgr.24773) 70 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-10T05:52:07.252 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:07 vm05 bash[17864]: cluster 2026-03-10T05:52:06.634010+0000 mgr.x (mgr.24773) 71 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:52:07.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: audit 2026-03-10T05:52:06.009479+0000 mon.a (mon.0) 861 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:07.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: audit 2026-03-10T05:52:06.016477+0000 mon.a (mon.0) 862 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:07.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: audit 2026-03-10T05:52:06.020654+0000 mon.b (mon.2) 65 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T05:52:07.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: audit 2026-03-10T05:52:06.021093+0000 mgr.x (mgr.24773) 59 : audit [DBG] from='mon.? 
-' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T05:52:07.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: audit 2026-03-10T05:52:06.029185+0000 mon.a (mon.0) 863 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:07.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: cephadm 2026-03-10T05:52:06.034472+0000 mgr.x (mgr.24773) 60 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.102:5000 to Dashboard 2026-03-10T05:52:07.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: audit 2026-03-10T05:52:06.034663+0000 mon.b (mon.2) 66 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T05:52:07.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: audit 2026-03-10T05:52:06.035036+0000 mgr.x (mgr.24773) 61 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T05:52:07.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: audit 2026-03-10T05:52:06.036608+0000 mon.b (mon.2) 67 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch 2026-03-10T05:52:07.344 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: audit 2026-03-10T05:52:06.036879+0000 mgr.x (mgr.24773) 62 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: audit 2026-03-10T05:52:06.039920+0000 mon.a (mon.0) 864 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: audit 2026-03-10T05:52:06.051083+0000 mon.b (mon.2) 68 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: audit 2026-03-10T05:52:06.053716+0000 mgr.x (mgr.24773) 63 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: audit 2026-03-10T05:52:06.055086+0000 mon.b (mon.2) 69 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm02.local:9093"}]: dispatch 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: audit 2026-03-10T05:52:06.055313+0000 mgr.x (mgr.24773) 64 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm02.local:9093"}]: dispatch 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: audit 2026-03-10T05:52:06.060189+0000 mon.a (mon.0) 865 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: audit 2026-03-10T05:52:06.071477+0000 mon.b (mon.2) 70 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: audit 2026-03-10T05:52:06.071803+0000 mgr.x (mgr.24773) 65 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: audit 2026-03-10T05:52:06.072440+0000 mon.b (mon.2) 71 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm05.local:3000"}]: dispatch 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: audit 2026-03-10T05:52:06.072629+0000 mgr.x (mgr.24773) 66 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm05.local:3000"}]: dispatch 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: audit 2026-03-10T05:52:06.075836+0000 mon.a (mon.0) 866 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: audit 2026-03-10T05:52:06.087398+0000 mon.b (mon.2) 72 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: audit 2026-03-10T05:52:06.088138+0000 mgr.x (mgr.24773) 67 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: audit 2026-03-10T05:52:06.089813+0000 mon.b (mon.2) 73 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm05.local:9095"}]: dispatch 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: audit 2026-03-10T05:52:06.090056+0000 mgr.x (mgr.24773) 68 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm05.local:9095"}]: dispatch 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: audit 2026-03-10T05:52:06.094072+0000 mon.a (mon.0) 867 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: audit 2026-03-10T05:52:06.140157+0000 mon.b (mon.2) 74 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: cephadm 2026-03-10T05:52:06.141134+0000 mgr.x (mgr.24773) 69 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: cephadm 2026-03-10T05:52:06.141469+0000 mgr.x (mgr.24773) 70 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[17462]: cluster 2026-03-10T05:52:06.634010+0000 mgr.x (mgr.24773) 71 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: audit 2026-03-10T05:52:06.009479+0000 mon.a (mon.0) 861 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: audit 2026-03-10T05:52:06.016477+0000 mon.a (mon.0) 862 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: audit 2026-03-10T05:52:06.020654+0000 mon.b (mon.2) 65 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: audit 2026-03-10T05:52:06.021093+0000 mgr.x (mgr.24773) 59 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: audit 2026-03-10T05:52:06.029185+0000 mon.a (mon.0) 863 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: cephadm 2026-03-10T05:52:06.034472+0000 mgr.x (mgr.24773) 60 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.102:5000 to Dashboard 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: audit 2026-03-10T05:52:06.034663+0000 mon.b (mon.2) 66 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: audit 2026-03-10T05:52:06.035036+0000 mgr.x (mgr.24773) 61 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: audit 2026-03-10T05:52:06.036608+0000 mon.b (mon.2) 67 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: audit 2026-03-10T05:52:06.036879+0000 mgr.x (mgr.24773) 62 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: audit 2026-03-10T05:52:06.039920+0000 mon.a (mon.0) 864 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: audit 2026-03-10T05:52:06.051083+0000 mon.b (mon.2) 68 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: audit 2026-03-10T05:52:06.053716+0000 mgr.x (mgr.24773) 63 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: audit 2026-03-10T05:52:06.055086+0000 mon.b (mon.2) 69 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm02.local:9093"}]: dispatch 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: audit 2026-03-10T05:52:06.055313+0000 mgr.x (mgr.24773) 64 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm02.local:9093"}]: dispatch 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: audit 2026-03-10T05:52:06.060189+0000 mon.a (mon.0) 865 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: audit 2026-03-10T05:52:06.071477+0000 mon.b (mon.2) 70 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: audit 2026-03-10T05:52:06.071803+0000 mgr.x (mgr.24773) 65 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T05:52:07.345 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: audit 2026-03-10T05:52:06.072440+0000 mon.b (mon.2) 71 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm05.local:3000"}]: dispatch 2026-03-10T05:52:07.346 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: audit 2026-03-10T05:52:06.072629+0000 mgr.x (mgr.24773) 66 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm05.local:3000"}]: dispatch 2026-03-10T05:52:07.346 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: audit 2026-03-10T05:52:06.075836+0000 mon.a (mon.0) 866 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:07.346 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: audit 2026-03-10T05:52:06.087398+0000 mon.b (mon.2) 72 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T05:52:07.346 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: audit 2026-03-10T05:52:06.088138+0000 mgr.x (mgr.24773) 67 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T05:52:07.346 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: audit 2026-03-10T05:52:06.089813+0000 mon.b (mon.2) 73 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm05.local:9095"}]: dispatch 2026-03-10T05:52:07.346 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: audit 2026-03-10T05:52:06.090056+0000 mgr.x (mgr.24773) 68 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm05.local:9095"}]: dispatch 2026-03-10T05:52:07.346 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: audit 2026-03-10T05:52:06.094072+0000 mon.a (mon.0) 867 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:07.346 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: audit 2026-03-10T05:52:06.140157+0000 mon.b (mon.2) 74 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:52:07.346 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: cephadm 2026-03-10T05:52:06.141134+0000 mgr.x (mgr.24773) 69 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-10T05:52:07.346 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: cephadm 2026-03-10T05:52:06.141469+0000 mgr.x (mgr.24773) 70 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-10T05:52:07.346 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 bash[22526]: cluster 2026-03-10T05:52:06.634010+0000 mgr.x (mgr.24773) 71 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:52:07.752 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:07 vm05 bash[37598]: [10/Mar/2026:05:52:07] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T05:52:07.752 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:07 vm05 bash[37598]: [10/Mar/2026:05:52:07] ENGINE Bus STOPPED 2026-03-10T05:52:07.752 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:07 vm05 bash[37598]: [10/Mar/2026:05:52:07] ENGINE Bus STARTING 2026-03-10T05:52:07.752 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:07 vm05 bash[37598]: [10/Mar/2026:05:52:07] ENGINE Serving on http://:::9283 2026-03-10T05:52:07.752 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:07 vm05 bash[37598]: [10/Mar/2026:05:52:07] ENGINE Bus STARTED 2026-03-10T05:52:08.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:07 vm02 systemd[1]: 
/etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:08.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:07 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:08.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:08 vm02 systemd[1]: Stopping Ceph mgr.y for 107483ae-1c44-11f1-b530-c1172cd6122a... 2026-03-10T05:52:08.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:07 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:08.085 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:52:07 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:08.085 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:52:07 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:08.085 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:52:07 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:08.085 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:52:07 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
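The KillMode=none warning repeated above for every daemon comes from the unit template cephadm writes per cluster (/etc/systemd/system/ceph-<fsid>@.service); cephadm sets KillMode=none so that its own container wrapper, not systemd, handles daemon teardown, so the message is noise here rather than a failure. A minimal sketch for inspecting the setting on one of the test hosts, assuming shell access and the fsid shown in the warnings:

    # Show the generated unit template and where KillMode is set
    # (fsid and daemon name taken from the log above).
    systemctl cat 'ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mgr.y.service' | grep -n 'KillMode'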
2026-03-10T05:52:08.085 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:52:07 vm02 bash[51578]: ts=2026-03-10T05:52:07.819Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003025827s 2026-03-10T05:52:08.085 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:52:07 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:08.085 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:07 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:08.425 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:08 vm02 bash[52153]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-mgr-y 2026-03-10T05:52:08.425 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:08 vm02 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mgr.y.service: Main process exited, code=exited, status=143/n/a 2026-03-10T05:52:08.425 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:08 vm02 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mgr.y.service: Failed with result 'exit-code'. 2026-03-10T05:52:08.425 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:08 vm02 systemd[1]: Stopped Ceph mgr.y for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T05:52:08.744 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:52:08 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:08.744 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:08 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:08.744 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:08 vm02 systemd[1]: Started Ceph mgr.y for 107483ae-1c44-11f1-b530-c1172cd6122a. 
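The mgr.y exit with status=143/n/a above is the normal SIGTERM path (143 = 128 + 15), not a crash: the old 17.2.0 container is stopped so the unit can come back on the new image, which is exactly the Stopped/Started pair systemd logs around it. A quick sanity check, assuming shell access to vm02:

    # 143 = 128 + SIGTERM; confirm signal 15 is TERM, then check the unit recovered.
    kill -l 15
    systemctl is-active 'ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mgr.y.service'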
2026-03-10T05:52:08.744 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:08 vm02 bash[22526]: cephadm 2026-03-10T05:52:07.296669+0000 mgr.x (mgr.24773) 72 : cephadm [INF] Upgrade: Updating mgr.y 2026-03-10T05:52:08.744 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:08 vm02 bash[22526]: audit 2026-03-10T05:52:07.424988+0000 mon.a (mon.0) 868 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:08.745 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:08 vm02 bash[22526]: audit 2026-03-10T05:52:07.441220+0000 mon.a (mon.0) 869 : audit [INF] from='mgr.24773 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T05:52:08.745 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:08 vm02 bash[22526]: audit 2026-03-10T05:52:07.442136+0000 mon.b (mon.2) 75 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T05:52:08.745 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:08 vm02 bash[22526]: audit 2026-03-10T05:52:07.443850+0000 mon.b (mon.2) 76 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T05:52:08.745 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:08 vm02 bash[22526]: audit 2026-03-10T05:52:07.444674+0000 mon.b (mon.2) 77 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:52:08.745 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:08 vm02 bash[22526]: cephadm 2026-03-10T05:52:07.445284+0000 mgr.x (mgr.24773) 73 : cephadm [INF] Deploying daemon mgr.y on vm02 2026-03-10T05:52:08.745 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:08 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:08.745 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:52:08 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:08.745 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:52:08 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:08.745 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:52:08 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:08.746 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:52:08 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:08.746 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:08 vm02 bash[17462]: cephadm 2026-03-10T05:52:07.296669+0000 mgr.x (mgr.24773) 72 : cephadm [INF] Upgrade: Updating mgr.y 2026-03-10T05:52:08.746 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:08 vm02 bash[17462]: audit 2026-03-10T05:52:07.424988+0000 mon.a (mon.0) 868 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:08.746 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:08 vm02 bash[17462]: audit 2026-03-10T05:52:07.441220+0000 mon.a (mon.0) 869 : audit [INF] from='mgr.24773 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T05:52:08.746 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:08 vm02 bash[17462]: audit 2026-03-10T05:52:07.442136+0000 mon.b (mon.2) 75 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T05:52:08.746 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:08 vm02 bash[17462]: audit 2026-03-10T05:52:07.443850+0000 mon.b (mon.2) 76 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T05:52:08.746 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:08 vm02 bash[17462]: audit 2026-03-10T05:52:07.444674+0000 mon.b (mon.2) 77 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:52:08.746 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:08 vm02 bash[17462]: cephadm 2026-03-10T05:52:07.445284+0000 mgr.x (mgr.24773) 73 : cephadm [INF] Deploying daemon mgr.y on vm02 2026-03-10T05:52:08.746 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:08 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:08.746 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:52:08 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
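The "Upgrade: Need to upgrade myself (mgr.x)" and "Upgrade: Updating mgr.y" records show the order cephadm works in: the standby mgr is redeployed on the target image first, and the active mgr then fails over to it (the "mgr fail" dispatch appears further below) so that it too can be replaced. One way to watch that handoff from any cluster node, a sketch assuming jq is installed:

    # Print the active mgr and the standby list; during this phase the active
    # name flips from x to y right after cephadm issues "mgr fail x".
    ceph mgr dump | jq -r '"active: \(.active_name)", "standbys: \([.standbys[].name] | join(", "))"'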
2026-03-10T05:52:08.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:08 vm05 bash[17864]: cephadm 2026-03-10T05:52:07.296669+0000 mgr.x (mgr.24773) 72 : cephadm [INF] Upgrade: Updating mgr.y 2026-03-10T05:52:08.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:08 vm05 bash[17864]: audit 2026-03-10T05:52:07.424988+0000 mon.a (mon.0) 868 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:08.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:08 vm05 bash[17864]: audit 2026-03-10T05:52:07.441220+0000 mon.a (mon.0) 869 : audit [INF] from='mgr.24773 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T05:52:08.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:08 vm05 bash[17864]: audit 2026-03-10T05:52:07.442136+0000 mon.b (mon.2) 75 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T05:52:08.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:08 vm05 bash[17864]: audit 2026-03-10T05:52:07.443850+0000 mon.b (mon.2) 76 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T05:52:08.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:08 vm05 bash[17864]: audit 2026-03-10T05:52:07.444674+0000 mon.b (mon.2) 77 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:52:08.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:08 vm05 bash[17864]: cephadm 2026-03-10T05:52:07.445284+0000 mgr.x (mgr.24773) 73 : cephadm [INF] Deploying daemon mgr.y on vm02 2026-03-10T05:52:09.064 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:08 vm02 bash[52264]: debug 2026-03-10T05:52:08.907+0000 7f95290b3140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T05:52:09.064 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:08 vm02 bash[52264]: debug 2026-03-10T05:52:08.939+0000 7f95290b3140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T05:52:09.064 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:09 vm02 bash[52264]: debug 2026-03-10T05:52:09.059+0000 7f95290b3140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T05:52:09.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:09 vm02 bash[52264]: debug 2026-03-10T05:52:09.323+0000 7f95290b3140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T05:52:10.071 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:09 vm02 bash[17462]: cluster 2026-03-10T05:52:08.634321+0000 mgr.x (mgr.24773) 74 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T05:52:10.071 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:09 vm02 bash[17462]: audit 2026-03-10T05:52:08.753087+0000 mon.b (mon.2) 78 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:52:10.071 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:09 vm02 bash[17462]: audit 2026-03-10T05:52:08.754140+0000 mon.a (mon.0) 870 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:10.072 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:09 vm02 bash[17462]: audit 
2026-03-10T05:52:08.764101+0000 mon.a (mon.0) 871 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:10.072 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:09 vm02 bash[52264]: debug 2026-03-10T05:52:09.735+0000 7f95290b3140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T05:52:10.072 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:09 vm02 bash[52264]: debug 2026-03-10T05:52:09.811+0000 7f95290b3140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T05:52:10.072 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:09 vm02 bash[52264]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T05:52:10.072 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:09 vm02 bash[52264]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-10T05:52:10.072 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:09 vm02 bash[52264]: from numpy import show_config as show_numpy_config 2026-03-10T05:52:10.072 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:09 vm02 bash[52264]: debug 2026-03-10T05:52:09.931+0000 7f95290b3140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T05:52:10.072 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:09 vm02 bash[22526]: cluster 2026-03-10T05:52:08.634321+0000 mgr.x (mgr.24773) 74 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T05:52:10.072 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:09 vm02 bash[22526]: audit 2026-03-10T05:52:08.753087+0000 mon.b (mon.2) 78 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:52:10.072 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:09 vm02 bash[22526]: audit 2026-03-10T05:52:08.754140+0000 mon.a (mon.0) 870 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:10.072 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:09 vm02 bash[22526]: audit 2026-03-10T05:52:08.764101+0000 mon.a (mon.0) 871 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:10.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:09 vm05 bash[17864]: cluster 2026-03-10T05:52:08.634321+0000 mgr.x (mgr.24773) 74 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T05:52:10.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:09 vm05 bash[17864]: audit 2026-03-10T05:52:08.753087+0000 mon.b (mon.2) 78 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:52:10.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:09 vm05 bash[17864]: audit 2026-03-10T05:52:08.754140+0000 mon.a (mon.0) 870 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:10.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:09 vm05 bash[17864]: audit 2026-03-10T05:52:08.764101+0000 mon.a (mon.0) 871 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:10.334 
INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:10 vm02 bash[52264]: debug 2026-03-10T05:52:10.067+0000 7f95290b3140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T05:52:10.334 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:10 vm02 bash[52264]: debug 2026-03-10T05:52:10.103+0000 7f95290b3140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T05:52:10.334 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:10 vm02 bash[52264]: debug 2026-03-10T05:52:10.135+0000 7f95290b3140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T05:52:10.334 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:10 vm02 bash[52264]: debug 2026-03-10T05:52:10.171+0000 7f95290b3140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T05:52:10.334 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:10 vm02 bash[52264]: debug 2026-03-10T05:52:10.219+0000 7f95290b3140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T05:52:10.910 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:10 vm02 bash[17462]: cluster 2026-03-10T05:52:10.634819+0000 mgr.x (mgr.24773) 75 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1021 B/s rd, 0 op/s 2026-03-10T05:52:10.910 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:10 vm02 bash[52264]: debug 2026-03-10T05:52:10.619+0000 7f95290b3140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T05:52:10.910 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:10 vm02 bash[52264]: debug 2026-03-10T05:52:10.655+0000 7f95290b3140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T05:52:10.910 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:10 vm02 bash[52264]: debug 2026-03-10T05:52:10.691+0000 7f95290b3140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T05:52:10.910 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:10 vm02 bash[52264]: debug 2026-03-10T05:52:10.827+0000 7f95290b3140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T05:52:10.910 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:10 vm02 bash[52264]: debug 2026-03-10T05:52:10.867+0000 7f95290b3140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T05:52:10.910 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:10 vm02 bash[22526]: cluster 2026-03-10T05:52:10.634819+0000 mgr.x (mgr.24773) 75 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1021 B/s rd, 0 op/s 2026-03-10T05:52:11.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:10 vm05 bash[17864]: cluster 2026-03-10T05:52:10.634819+0000 mgr.x (mgr.24773) 75 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1021 B/s rd, 0 op/s 2026-03-10T05:52:11.313 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:10 vm02 bash[52264]: debug 2026-03-10T05:52:10.903+0000 7f95290b3140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T05:52:11.313 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:11 vm02 bash[52264]: debug 2026-03-10T05:52:11.003+0000 7f95290b3140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T05:52:11.313 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:11 vm02 bash[52264]: debug 2026-03-10T05:52:11.151+0000 7f95290b3140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T05:52:11.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 
05:52:11 vm02 bash[52264]: debug 2026-03-10T05:52:11.307+0000 7f95290b3140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T05:52:11.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:11 vm02 bash[52264]: debug 2026-03-10T05:52:11.339+0000 7f95290b3140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T05:52:11.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:11 vm02 bash[52264]: debug 2026-03-10T05:52:11.375+0000 7f95290b3140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T05:52:11.584 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:11 vm02 bash[52264]: debug 2026-03-10T05:52:11.511+0000 7f95290b3140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T05:52:12.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:11 vm02 bash[17462]: cluster 2026-03-10T05:52:11.723165+0000 mon.a (mon.0) 872 : cluster [DBG] Standby manager daemon y restarted 2026-03-10T05:52:12.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:11 vm02 bash[17462]: cluster 2026-03-10T05:52:11.723261+0000 mon.a (mon.0) 873 : cluster [DBG] Standby manager daemon y started 2026-03-10T05:52:12.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:11 vm02 bash[17462]: audit 2026-03-10T05:52:11.725543+0000 mon.c (mon.1) 129 : audit [DBG] from='mgr.? 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-10T05:52:12.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:11 vm02 bash[17462]: audit 2026-03-10T05:52:11.726039+0000 mon.c (mon.1) 130 : audit [DBG] from='mgr.? 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T05:52:12.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:11 vm02 bash[17462]: audit 2026-03-10T05:52:11.726833+0000 mon.c (mon.1) 131 : audit [DBG] from='mgr.? 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-10T05:52:12.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:11 vm02 bash[17462]: audit 2026-03-10T05:52:11.727622+0000 mon.c (mon.1) 132 : audit [DBG] from='mgr.? 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T05:52:12.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:11 vm02 bash[52264]: debug 2026-03-10T05:52:11.715+0000 7f95290b3140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T05:52:12.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:11 vm02 bash[52264]: [10/Mar/2026:05:52:11] ENGINE Bus STARTING 2026-03-10T05:52:12.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:11 vm02 bash[52264]: CherryPy Checker: 2026-03-10T05:52:12.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:11 vm02 bash[52264]: The Application mounted at '' has an empty config. 
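The run of "Module <name> has missing NOTIFY_TYPES member" lines above is the newly deployed mgr.y loading its Python modules; modules that do not declare the NOTIFY_TYPES attribute log this once at startup, and despite the -1 priority it is informational. The scipy/NumPy sub-interpreter warning a few records earlier is the same kind of import-time noise. To separate this chatter from real failures in the daemon's journal, a sketch assuming shell access to vm02 and the unit name derived from the fsid above:

    # Count the load-time NOTIFY_TYPES warnings, then look at what remains.
    journalctl -u 'ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mgr.y.service' --no-pager | grep -c 'missing NOTIFY_TYPES'
    journalctl -u 'ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mgr.y.service' --no-pager | grep -v 'NOTIFY_TYPES' | tail -n 20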
2026-03-10T05:52:12.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:11 vm02 bash[52264]: [10/Mar/2026:05:52:11] ENGINE Serving on http://:::9283 2026-03-10T05:52:12.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:11 vm02 bash[52264]: [10/Mar/2026:05:52:11] ENGINE Bus STARTED 2026-03-10T05:52:12.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:11 vm02 bash[22526]: cluster 2026-03-10T05:52:11.723165+0000 mon.a (mon.0) 872 : cluster [DBG] Standby manager daemon y restarted 2026-03-10T05:52:12.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:11 vm02 bash[22526]: cluster 2026-03-10T05:52:11.723261+0000 mon.a (mon.0) 873 : cluster [DBG] Standby manager daemon y started 2026-03-10T05:52:12.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:11 vm02 bash[22526]: audit 2026-03-10T05:52:11.725543+0000 mon.c (mon.1) 129 : audit [DBG] from='mgr.? 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-10T05:52:12.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:11 vm02 bash[22526]: audit 2026-03-10T05:52:11.726039+0000 mon.c (mon.1) 130 : audit [DBG] from='mgr.? 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T05:52:12.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:11 vm02 bash[22526]: audit 2026-03-10T05:52:11.726833+0000 mon.c (mon.1) 131 : audit [DBG] from='mgr.? 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-10T05:52:12.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:11 vm02 bash[22526]: audit 2026-03-10T05:52:11.727622+0000 mon.c (mon.1) 132 : audit [DBG] from='mgr.? 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T05:52:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:11 vm05 bash[17864]: cluster 2026-03-10T05:52:11.723165+0000 mon.a (mon.0) 872 : cluster [DBG] Standby manager daemon y restarted 2026-03-10T05:52:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:11 vm05 bash[17864]: cluster 2026-03-10T05:52:11.723261+0000 mon.a (mon.0) 873 : cluster [DBG] Standby manager daemon y started 2026-03-10T05:52:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:11 vm05 bash[17864]: audit 2026-03-10T05:52:11.725543+0000 mon.c (mon.1) 129 : audit [DBG] from='mgr.? 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-10T05:52:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:11 vm05 bash[17864]: audit 2026-03-10T05:52:11.726039+0000 mon.c (mon.1) 130 : audit [DBG] from='mgr.? 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T05:52:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:11 vm05 bash[17864]: audit 2026-03-10T05:52:11.726833+0000 mon.c (mon.1) 131 : audit [DBG] from='mgr.? 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-10T05:52:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:11 vm05 bash[17864]: audit 2026-03-10T05:52:11.727622+0000 mon.c (mon.1) 132 : audit [DBG] from='mgr.? 
192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T05:52:13.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:12 vm05 bash[17864]: cluster 2026-03-10T05:52:11.854255+0000 mon.a (mon.0) 874 : cluster [DBG] mgrmap e26: x(active, since 34s), standbys: y 2026-03-10T05:52:13.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:12 vm05 bash[17864]: cluster 2026-03-10T05:52:12.635125+0000 mgr.x (mgr.24773) 76 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:52:13.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:12 vm02 bash[17462]: cluster 2026-03-10T05:52:11.854255+0000 mon.a (mon.0) 874 : cluster [DBG] mgrmap e26: x(active, since 34s), standbys: y 2026-03-10T05:52:13.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:12 vm02 bash[17462]: cluster 2026-03-10T05:52:12.635125+0000 mgr.x (mgr.24773) 76 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:52:13.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:12 vm02 bash[22526]: cluster 2026-03-10T05:52:11.854255+0000 mon.a (mon.0) 874 : cluster [DBG] mgrmap e26: x(active, since 34s), standbys: y 2026-03-10T05:52:13.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:12 vm02 bash[22526]: cluster 2026-03-10T05:52:12.635125+0000 mgr.x (mgr.24773) 76 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:52:14.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:14 vm05 bash[17864]: audit 2026-03-10T05:52:13.340189+0000 mgr.x (mgr.24773) 77 : audit [DBG] from='client.24940 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:52:14.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:14 vm05 bash[17864]: audit 2026-03-10T05:52:14.041869+0000 mon.a (mon.0) 875 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:14.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:14 vm05 bash[17864]: audit 2026-03-10T05:52:14.050221+0000 mon.a (mon.0) 876 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:14.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:14 vm05 bash[17864]: audit 2026-03-10T05:52:14.156162+0000 mon.a (mon.0) 877 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:14.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:14 vm05 bash[17864]: audit 2026-03-10T05:52:14.162050+0000 mon.a (mon.0) 878 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:14.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:14 vm02 bash[17462]: audit 2026-03-10T05:52:13.340189+0000 mgr.x (mgr.24773) 77 : audit [DBG] from='client.24940 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:52:14.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:14 vm02 bash[17462]: audit 2026-03-10T05:52:14.041869+0000 mon.a (mon.0) 875 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:14.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:14 vm02 bash[17462]: audit 2026-03-10T05:52:14.050221+0000 mon.a (mon.0) 876 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:14.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:14 vm02 bash[17462]: audit 2026-03-10T05:52:14.156162+0000 mon.a 
(mon.0) 877 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:52:14.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:14 vm02 bash[17462]: audit 2026-03-10T05:52:14.162050+0000 mon.a (mon.0) 878 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:52:14.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:14 vm02 bash[22526]: audit 2026-03-10T05:52:13.340189+0000 mgr.x (mgr.24773) 77 : audit [DBG] from='client.24940 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:52:14.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:14 vm02 bash[22526]: audit 2026-03-10T05:52:14.041869+0000 mon.a (mon.0) 875 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:52:14.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:14 vm02 bash[22526]: audit 2026-03-10T05:52:14.050221+0000 mon.a (mon.0) 876 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:52:14.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:14 vm02 bash[22526]: audit 2026-03-10T05:52:14.156162+0000 mon.a (mon.0) 877 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:52:14.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:14 vm02 bash[22526]: audit 2026-03-10T05:52:14.162050+0000 mon.a (mon.0) 878 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:52:15.459 INFO:teuthology.orchestra.run.vm02.stdout:true
2026-03-10T05:52:15.834 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T05:52:15.834 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.a vm02 *:9093,9094 running (18s) 1s ago 5m 14.7M - 0.25.0 c8568f914cd2 7a7c5c2cddb6
2026-03-10T05:52:15.834 INFO:teuthology.orchestra.run.vm02.stdout:grafana.a vm05 *:3000 running (16s) 1s ago 4m 38.4M - dad864ee21e9 95c6d977988a
2026-03-10T05:52:15.834 INFO:teuthology.orchestra.run.vm02.stdout:iscsi.foo.vm02.mxbwmh vm02 running (22s) 1s ago 4m 41.4M - 3.5 e1d6a67b021e 0ef84f339486
2026-03-10T05:52:15.834 INFO:teuthology.orchestra.run.vm02.stdout:mgr.x vm05 *:8443,9283 running (46s) 1s ago 7m 528M - 19.2.3-678-ge911bdeb 654f31e6858e eefd57c0b61c
2026-03-10T05:52:15.834 INFO:teuthology.orchestra.run.vm02.stdout:mgr.y vm02 *:8443,9283,8765 running (7s) 1s ago 8m 345M - 19.2.3-678-ge911bdeb 654f31e6858e ef46d0f7b15e
2026-03-10T05:52:15.834 INFO:teuthology.orchestra.run.vm02.stdout:mon.a vm02 running (8m) 1s ago 8m 51.9M 2048M 17.2.0 e1d6a67b021e bf59d12a7baa
2026-03-10T05:52:15.834 INFO:teuthology.orchestra.run.vm02.stdout:mon.b vm05 running (7m) 1s ago 7m 42.1M 2048M 17.2.0 e1d6a67b021e 96a2a71fd403
2026-03-10T05:52:15.834 INFO:teuthology.orchestra.run.vm02.stdout:mon.c vm02 running (7m) 1s ago 7m 37.4M 2048M 17.2.0 e1d6a67b021e 2f6dcf491c61
2026-03-10T05:52:15.834 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.a vm02 *:9100 running (14s) 1s ago 5m 2784k - 1.7.0 72c9c2088986 90288450bd1f
2026-03-10T05:52:15.834 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.b vm05 *:9100 running (13s) 1s ago 5m 2792k - 1.7.0 72c9c2088986 4e859143cb0e
2026-03-10T05:52:15.834 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (7m) 1s ago 7m 50.6M 4096M 17.2.0 e1d6a67b021e 563d55a3e6a4
2026-03-10T05:52:15.834 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (7m) 1s ago 7m 53.4M 4096M 17.2.0 e1d6a67b021e 8c25a1e89677
2026-03-10T05:52:15.834 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (6m) 1s ago 6m 48.5M 4096M 17.2.0 e1d6a67b021e 826f54bdbc5c
2026-03-10T05:52:15.834 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm02 running (6m) 1s ago 6m 52.2M 4096M 17.2.0 e1d6a67b021e 0c6cfa53c9fd
2026-03-10T05:52:15.834 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (6m) 1s ago 6m 52.5M 4096M 17.2.0 e1d6a67b021e 4ffe1741f201
2026-03-10T05:52:15.834 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (6m) 1s ago 6m 50.9M 4096M 17.2.0 e1d6a67b021e cba5583c238e
2026-03-10T05:52:15.834 INFO:teuthology.orchestra.run.vm02.stdout:osd.6 vm05 running (5m) 1s ago 5m 48.6M 4096M 17.2.0 e1d6a67b021e 9d1b370357d7
2026-03-10T05:52:15.834 INFO:teuthology.orchestra.run.vm02.stdout:osd.7 vm05 running (5m) 1s ago 5m 50.2M 4096M 17.2.0 e1d6a67b021e 8a4837b788cf
2026-03-10T05:52:15.834 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.a vm05 *:9095 running (9s) 1s ago 5m 30.3M - 2.51.0 1d3b7f56885b dbef57bc83d9
2026-03-10T05:52:15.834 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm02.pbogjd vm02 *:8000 running (4m) 1s ago 4m 85.0M - 17.2.0 e1d6a67b021e 2ab2ffd1abaa
2026-03-10T05:52:15.834 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm05.hvmsxl vm05 *:8000 running (4m) 1s ago 4m 84.5M - 17.2.0 e1d6a67b021e 85d1c77b7e9d
2026-03-10T05:52:15.834 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm02.pglcfm vm02 *:80 running (4m) 1s ago 4m 84.2M - 17.2.0 e1d6a67b021e ef152a460673
2026-03-10T05:52:15.834 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm05.hqqmap vm05 *:80 running (4m) 1s ago 4m 84.7M - 17.2.0 e1d6a67b021e 29c9ee794f34
2026-03-10T05:52:16.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:15 vm05 bash[17864]: cluster 2026-03-10T05:52:14.635460+0000 mgr.x (mgr.24773) 78 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:52:16.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:15 vm05 bash[17864]: audit 2026-03-10T05:52:14.694035+0000 mon.a (mon.0) 879 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:52:16.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:15 vm05 bash[17864]: audit 2026-03-10T05:52:14.703139+0000 mon.a (mon.0) 880 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:52:16.064 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:52:16.064 INFO:teuthology.orchestra.run.vm02.stdout: "mon": {
2026-03-10T05:52:16.064 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3
2026-03-10T05:52:16.064 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:52:16.064 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": {
2026-03-10T05:52:16.064 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T05:52:16.064 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:52:16.064 INFO:teuthology.orchestra.run.vm02.stdout: "osd": {
2026-03-10T05:52:16.064 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8
2026-03-10T05:52:16.064 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:52:16.064 INFO:teuthology.orchestra.run.vm02.stdout: "mds": {},
2026-03-10T05:52:16.064 INFO:teuthology.orchestra.run.vm02.stdout: "rgw": {
2026-03-10T05:52:16.064 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4
2026-03-10T05:52:16.064 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:52:16.064 INFO:teuthology.orchestra.run.vm02.stdout: "overall": {
2026-03-10T05:52:16.064 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 15,
2026-03-10T05:52:16.064 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T05:52:16.064 INFO:teuthology.orchestra.run.vm02.stdout: }
2026-03-10T05:52:16.064 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:52:16.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:15 vm02 bash[17462]: cluster 2026-03-10T05:52:14.635460+0000 mgr.x (mgr.24773) 78 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:52:16.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:15 vm02 bash[17462]: audit 2026-03-10T05:52:14.694035+0000 mon.a (mon.0) 879 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:52:16.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:15 vm02 bash[17462]: audit 2026-03-10T05:52:14.703139+0000 mon.a (mon.0) 880 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:52:16.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:15 vm02 bash[22526]: cluster 2026-03-10T05:52:14.635460+0000 mgr.x (mgr.24773) 78 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:52:16.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:15 vm02 bash[22526]: audit 2026-03-10T05:52:14.694035+0000 mon.a (mon.0) 879 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:52:16.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:15 vm02 bash[22526]: audit 2026-03-10T05:52:14.703139+0000 mon.a (mon.0) 880 : audit [INF] from='mgr.24773 ' entity='mgr.x'
2026-03-10T05:52:16.249 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:52:16.249 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
2026-03-10T05:52:16.249 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true,
2026-03-10T05:52:16.249 INFO:teuthology.orchestra.run.vm02.stdout: "which": "Upgrading all daemon types on all hosts",
2026-03-10T05:52:16.249 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [
2026-03-10T05:52:16.249 INFO:teuthology.orchestra.run.vm02.stdout: "mgr"
2026-03-10T05:52:16.249 INFO:teuthology.orchestra.run.vm02.stdout: ],
2026-03-10T05:52:16.249 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "2/23 daemons upgraded",
2026-03-10T05:52:16.249 INFO:teuthology.orchestra.run.vm02.stdout: "message": "Currently upgrading mgr daemons",
2026-03-10T05:52:16.250 INFO:teuthology.orchestra.run.vm02.stdout: "is_paused": false
2026-03-10T05:52:16.250 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:52:16.469 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_OK
2026-03-10T05:52:16.501 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:16 vm05 bash[37598]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:52:16] "GET /metrics HTTP/1.1" 200 37764 "" "Prometheus/2.51.0"
2026-03-10T05:52:17.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:16 vm05 bash[17864]: audit 2026-03-10T05:52:15.450006+0000 mgr.x (mgr.24773) 79 : audit [DBG] from='client.24991 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
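The ceph versions and orch upgrade status output above catches the cluster mid-upgrade in a consistent state: the two mgrs already report 19.2.3-678-ge911bdeb while the 3 mons, 8 osds and 4 rgws still report 17.2.0, which is what "2/23 daemons upgraded" with services_complete=[mgr] means, with the mons next in cephadm's order once the active mgr has handed off. To list exactly which daemons are still on the old release, a sketch assuming jq and the field names this release emits from orch ps:

    # Name every daemon whose running version is still the pre-upgrade 17.2.0.
    ceph orch ps --format json | jq -r '.[] | select(.version == "17.2.0") | "\(.daemon_type).\(.daemon_id)"'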
2026-03-10T05:52:17.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:16 vm05 bash[17864]: audit 2026-03-10T05:52:15.644524+0000 mgr.x (mgr.24773) 80 : audit [DBG] from='client.24994 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:52:17.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:16 vm05 bash[17864]: audit 2026-03-10T05:52:16.063592+0000 mon.a (mon.0) 881 : audit [DBG] from='client.? 192.168.123.102:0/1326361545' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:52:17.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:16 vm05 bash[17864]: audit 2026-03-10T05:52:16.468154+0000 mon.c (mon.1) 133 : audit [DBG] from='client.? 192.168.123.102:0/1628518571' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:52:17.002 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:16 vm05 bash[40098]: ts=2026-03-10T05:52:16.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T05:52:17.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:16 vm02 bash[17462]: audit 2026-03-10T05:52:15.450006+0000 mgr.x (mgr.24773) 79 : audit [DBG] from='client.24991 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:52:17.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:16 vm02 bash[17462]: audit 2026-03-10T05:52:15.644524+0000 mgr.x (mgr.24773) 80 : audit [DBG] from='client.24994 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:52:17.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:16 vm02 bash[17462]: audit 2026-03-10T05:52:16.063592+0000 mon.a (mon.0) 881 : audit [DBG] from='client.? 192.168.123.102:0/1326361545' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:52:17.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:16 vm02 bash[17462]: audit 2026-03-10T05:52:16.468154+0000 mon.c (mon.1) 133 : audit [DBG] from='client.? 
192.168.123.102:0/1628518571' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:52:17.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:16 vm02 bash[22526]: audit 2026-03-10T05:52:15.450006+0000 mgr.x (mgr.24773) 79 : audit [DBG] from='client.24991 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:52:17.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:16 vm02 bash[22526]: audit 2026-03-10T05:52:15.644524+0000 mgr.x (mgr.24773) 80 : audit [DBG] from='client.24994 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:52:17.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:16 vm02 bash[22526]: audit 2026-03-10T05:52:16.063592+0000 mon.a (mon.0) 881 : audit [DBG] from='client.? 192.168.123.102:0/1326361545' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:52:17.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:16 vm02 bash[22526]: audit 2026-03-10T05:52:16.468154+0000 mon.c (mon.1) 133 : audit [DBG] from='client.? 192.168.123.102:0/1628518571' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:52:18.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:17 vm05 bash[17864]: audit 2026-03-10T05:52:15.830640+0000 mgr.x (mgr.24773) 81 : audit [DBG] from='client.15051 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:52:18.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:17 vm05 bash[17864]: audit 2026-03-10T05:52:16.250664+0000 mgr.x (mgr.24773) 82 : audit [DBG] from='client.25003 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:52:18.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:17 vm05 bash[17864]: cluster 2026-03-10T05:52:16.636007+0000 mgr.x (mgr.24773) 83 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:52:18.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:17 vm02 bash[17462]: audit 2026-03-10T05:52:15.830640+0000 mgr.x (mgr.24773) 81 : audit [DBG] from='client.15051 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:52:18.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:17 vm02 bash[17462]: audit 2026-03-10T05:52:16.250664+0000 mgr.x (mgr.24773) 82 : audit [DBG] from='client.25003 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:52:18.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:17 vm02 bash[17462]: cluster 2026-03-10T05:52:16.636007+0000 mgr.x (mgr.24773) 83 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:52:18.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:17 vm02 bash[22526]: audit 2026-03-10T05:52:15.830640+0000 mgr.x (mgr.24773) 81 : audit [DBG] from='client.15051 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:52:18.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:17 vm02 bash[22526]: audit 2026-03-10T05:52:16.250664+0000 mgr.x (mgr.24773) 82 : audit [DBG] from='client.25003 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 
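The CephNodeDiskspaceWarning evaluation failure logged by prometheus.a above is a PromQL join problem rather than a Ceph one: the error text shows two node_uname_info series for instance="vm05", one carrying a cluster label and one without (plausibly left behind by the pre-upgrade monitoring stack), so the rule's on (instance) group_left (nodename) match becomes many-to-many. Collapsing the right-hand side, for example with max by (instance, nodename) (node_uname_info), would make the join one-to-one again. A sketch for confirming the duplication against the Prometheus endpoint configured earlier (vm05, port 9095):

    # Any instance with a count > 1 reproduces the "many-to-many matching
    # not allowed" failure from the alert rule.
    curl -sG 'http://vm05.local:9095/api/v1/query' --data-urlencode 'query=count by (instance) (node_uname_info)'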
2026-03-10T05:52:18.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:17 vm02 bash[22526]: cluster 2026-03-10T05:52:16.636007+0000 mgr.x (mgr.24773) 83 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:52:19.661 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:19 vm02 bash[17462]: cluster 2026-03-10T05:52:18.636362+0000 mgr.x (mgr.24773) 84 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:52:19.661 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:19 vm02 bash[22526]: cluster 2026-03-10T05:52:18.636362+0000 mgr.x (mgr.24773) 84 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:52:19.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:19 vm05 bash[17864]: cluster 2026-03-10T05:52:18.636362+0000 mgr.x (mgr.24773) 84 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:52:22.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:21 vm05 bash[17864]: cluster 2026-03-10T05:52:20.636867+0000 mgr.x (mgr.24773) 85 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:52:22.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:21 vm05 bash[17864]: audit 2026-03-10T05:52:21.172636+0000 mon.a (mon.0) 882 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:22.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:21 vm05 bash[17864]: audit 2026-03-10T05:52:21.180013+0000 mon.a (mon.0) 883 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:22.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:21 vm05 bash[17864]: audit 2026-03-10T05:52:21.183223+0000 mon.b (mon.2) 79 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:52:22.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:21 vm05 bash[17864]: audit 2026-03-10T05:52:21.183878+0000 mon.b (mon.2) 80 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:52:22.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:21 vm05 bash[17864]: audit 2026-03-10T05:52:21.187091+0000 mon.a (mon.0) 884 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:22.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:21 vm05 bash[17864]: audit 2026-03-10T05:52:21.227804+0000 mon.b (mon.2) 81 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:52:22.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:21 vm05 bash[17864]: audit 2026-03-10T05:52:21.228932+0000 mon.a (mon.0) 885 : audit [INF] from='mgr.24773 ' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch 2026-03-10T05:52:22.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:21 vm05 bash[17864]: audit 2026-03-10T05:52:21.230636+0000 mon.b (mon.2) 82 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch 2026-03-10T05:52:22.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:21 vm05 bash[17864]: cluster 2026-03-10T05:52:21.240656+0000 
mon.a (mon.0) 886 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-10T05:52:22.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:21 vm02 bash[17462]: cluster 2026-03-10T05:52:20.636867+0000 mgr.x (mgr.24773) 85 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:52:22.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:21 vm02 bash[17462]: audit 2026-03-10T05:52:21.172636+0000 mon.a (mon.0) 882 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:22.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:21 vm02 bash[17462]: audit 2026-03-10T05:52:21.180013+0000 mon.a (mon.0) 883 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:22.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:21 vm02 bash[17462]: audit 2026-03-10T05:52:21.183223+0000 mon.b (mon.2) 79 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:52:22.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:21 vm02 bash[17462]: audit 2026-03-10T05:52:21.183878+0000 mon.b (mon.2) 80 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:52:22.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:21 vm02 bash[17462]: audit 2026-03-10T05:52:21.187091+0000 mon.a (mon.0) 884 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:22.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:21 vm02 bash[17462]: audit 2026-03-10T05:52:21.227804+0000 mon.b (mon.2) 81 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:52:22.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:21 vm02 bash[17462]: audit 2026-03-10T05:52:21.228932+0000 mon.a (mon.0) 885 : audit [INF] from='mgr.24773 ' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch 2026-03-10T05:52:22.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:21 vm02 bash[17462]: audit 2026-03-10T05:52:21.230636+0000 mon.b (mon.2) 82 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch 2026-03-10T05:52:22.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:21 vm02 bash[17462]: cluster 2026-03-10T05:52:21.240656+0000 mon.a (mon.0) 886 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-10T05:52:22.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:21 vm02 bash[22526]: cluster 2026-03-10T05:52:20.636867+0000 mgr.x (mgr.24773) 85 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:52:22.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:21 vm02 bash[22526]: audit 2026-03-10T05:52:21.172636+0000 mon.a (mon.0) 882 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:22.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:21 vm02 bash[22526]: audit 2026-03-10T05:52:21.180013+0000 mon.a (mon.0) 883 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:22.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:21 vm02 bash[22526]: audit 2026-03-10T05:52:21.183223+0000 mon.b (mon.2) 79 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:52:22.084 
INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:21 vm02 bash[22526]: audit 2026-03-10T05:52:21.183878+0000 mon.b (mon.2) 80 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:52:22.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:21 vm02 bash[22526]: audit 2026-03-10T05:52:21.187091+0000 mon.a (mon.0) 884 : audit [INF] from='mgr.24773 ' entity='mgr.x' 2026-03-10T05:52:22.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:21 vm02 bash[22526]: audit 2026-03-10T05:52:21.227804+0000 mon.b (mon.2) 81 : audit [DBG] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:52:22.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:21 vm02 bash[22526]: audit 2026-03-10T05:52:21.228932+0000 mon.a (mon.0) 885 : audit [INF] from='mgr.24773 ' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch 2026-03-10T05:52:22.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:21 vm02 bash[22526]: audit 2026-03-10T05:52:21.230636+0000 mon.b (mon.2) 82 : audit [INF] from='mgr.24773 192.168.123.105:0/3918521054' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch 2026-03-10T05:52:22.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:21 vm02 bash[22526]: cluster 2026-03-10T05:52:21.240656+0000 mon.a (mon.0) 886 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-10T05:52:22.501 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:22 vm05 bash[37598]: ignoring --setuser ceph since I am not root 2026-03-10T05:52:22.501 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:22 vm05 bash[37598]: ignoring --setgroup ceph since I am not root 2026-03-10T05:52:22.501 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:22 vm05 bash[37598]: debug 2026-03-10T05:52:22.301+0000 7f965961d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T05:52:22.501 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:22 vm05 bash[37598]: debug 2026-03-10T05:52:22.333+0000 7f965961d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T05:52:22.501 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:22 vm05 bash[37598]: debug 2026-03-10T05:52:22.437+0000 7f965961d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T05:52:22.583 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:22 vm02 bash[52264]: [10/Mar/2026:05:52:22] ENGINE Bus STOPPING 2026-03-10T05:52:22.583 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:22 vm02 bash[52264]: [10/Mar/2026:05:52:22] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T05:52:22.583 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:22 vm02 bash[52264]: [10/Mar/2026:05:52:22] ENGINE Bus STOPPED 2026-03-10T05:52:22.583 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:22 vm02 bash[52264]: [10/Mar/2026:05:52:22] ENGINE Bus STARTING 2026-03-10T05:52:22.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:22 vm02 bash[17462]: cephadm 2026-03-10T05:52:21.228666+0000 mgr.x (mgr.24773) 86 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-10T05:52:22.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:22 vm02 bash[17462]: cephadm 2026-03-10T05:52:21.228887+0000 mgr.x (mgr.24773) 87 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-10T05:52:22.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:22 vm02 bash[17462]: cephadm 
2026-03-10T05:52:21.230516+0000 mgr.x (mgr.24773) 88 : cephadm [INF] Failing over to other MGR 2026-03-10T05:52:22.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:22 vm02 bash[17462]: audit 2026-03-10T05:52:22.197179+0000 mon.a (mon.0) 887 : audit [INF] from='mgr.24773 ' entity='mgr.x' cmd='[{"prefix": "mgr fail", "who": "x"}]': finished 2026-03-10T05:52:22.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:22 vm02 bash[17462]: cluster 2026-03-10T05:52:22.197244+0000 mon.a (mon.0) 888 : cluster [DBG] mgrmap e27: y(active, starting, since 0.967169s) 2026-03-10T05:52:22.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:22 vm02 bash[17462]: audit 2026-03-10T05:52:22.198484+0000 mon.c (mon.1) 134 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:52:22.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:22 vm02 bash[17462]: audit 2026-03-10T05:52:22.198659+0000 mon.c (mon.1) 135 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:52:22.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:22 vm02 bash[17462]: audit 2026-03-10T05:52:22.198702+0000 mon.c (mon.1) 136 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:22 vm02 bash[17462]: audit 2026-03-10T05:52:22.200528+0000 mon.c (mon.1) 137 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:22 vm02 bash[17462]: audit 2026-03-10T05:52:22.200891+0000 mon.c (mon.1) 138 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:22 vm02 bash[17462]: audit 2026-03-10T05:52:22.201548+0000 mon.c (mon.1) 139 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:22 vm02 bash[17462]: audit 2026-03-10T05:52:22.201923+0000 mon.c (mon.1) 140 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:22 vm02 bash[17462]: audit 2026-03-10T05:52:22.202529+0000 mon.c (mon.1) 141 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:22 vm02 bash[17462]: audit 2026-03-10T05:52:22.203093+0000 mon.c (mon.1) 142 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:22 vm02 bash[17462]: audit 2026-03-10T05:52:22.203601+0000 mon.c (mon.1) 143 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:22 vm02 bash[17462]: audit 2026-03-10T05:52:22.204115+0000 mon.c (mon.1) 144 : audit [DBG] 
from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:22 vm02 bash[17462]: audit 2026-03-10T05:52:22.204697+0000 mon.c (mon.1) 145 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:22 vm02 bash[17462]: audit 2026-03-10T05:52:22.205294+0000 mon.c (mon.1) 146 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:22 vm02 bash[17462]: audit 2026-03-10T05:52:22.205754+0000 mon.c (mon.1) 147 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:22 vm02 bash[17462]: audit 2026-03-10T05:52:22.206416+0000 mon.c (mon.1) 148 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:22 vm02 bash[17462]: cluster 2026-03-10T05:52:22.398305+0000 mon.a (mon.0) 889 : cluster [INF] Manager daemon y is now available 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:22 vm02 bash[17462]: audit 2026-03-10T05:52:22.421640+0000 mon.c (mon.1) 149 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:22 vm02 bash[17462]: audit 2026-03-10T05:52:22.440265+0000 mon.c (mon.1) 150 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:22 vm02 bash[17462]: audit 2026-03-10T05:52:22.440525+0000 mon.a (mon.0) 890 : audit [INF] from='mgr.24988 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:22 vm02 bash[17462]: audit 2026-03-10T05:52:22.480340+0000 mon.c (mon.1) 151 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:22 vm02 bash[17462]: audit 2026-03-10T05:52:22.480580+0000 mon.a (mon.0) 891 : audit [INF] from='mgr.24988 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:22 vm02 bash[52264]: [10/Mar/2026:05:52:22] ENGINE Serving on http://:::9283 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:22 vm02 bash[52264]: [10/Mar/2026:05:52:22] ENGINE Bus STARTED 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:22 vm02 bash[22526]: cephadm 2026-03-10T05:52:21.228666+0000 mgr.x (mgr.24773) 86 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:22 vm02 
bash[22526]: cephadm 2026-03-10T05:52:21.228887+0000 mgr.x (mgr.24773) 87 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:22 vm02 bash[22526]: cephadm 2026-03-10T05:52:21.230516+0000 mgr.x (mgr.24773) 88 : cephadm [INF] Failing over to other MGR 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:22 vm02 bash[22526]: audit 2026-03-10T05:52:22.197179+0000 mon.a (mon.0) 887 : audit [INF] from='mgr.24773 ' entity='mgr.x' cmd='[{"prefix": "mgr fail", "who": "x"}]': finished 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:22 vm02 bash[22526]: cluster 2026-03-10T05:52:22.197244+0000 mon.a (mon.0) 888 : cluster [DBG] mgrmap e27: y(active, starting, since 0.967169s) 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:22 vm02 bash[22526]: audit 2026-03-10T05:52:22.198484+0000 mon.c (mon.1) 134 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:22 vm02 bash[22526]: audit 2026-03-10T05:52:22.198659+0000 mon.c (mon.1) 135 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:22 vm02 bash[22526]: audit 2026-03-10T05:52:22.198702+0000 mon.c (mon.1) 136 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:22 vm02 bash[22526]: audit 2026-03-10T05:52:22.200528+0000 mon.c (mon.1) 137 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:22 vm02 bash[22526]: audit 2026-03-10T05:52:22.200891+0000 mon.c (mon.1) 138 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:22 vm02 bash[22526]: audit 2026-03-10T05:52:22.201548+0000 mon.c (mon.1) 139 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:22 vm02 bash[22526]: audit 2026-03-10T05:52:22.201923+0000 mon.c (mon.1) 140 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:22 vm02 bash[22526]: audit 2026-03-10T05:52:22.202529+0000 mon.c (mon.1) 141 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:22 vm02 bash[22526]: audit 2026-03-10T05:52:22.203093+0000 mon.c (mon.1) 142 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:22 vm02 bash[22526]: audit 2026-03-10T05:52:22.203601+0000 mon.c (mon.1) 143 : audit [DBG] from='mgr.24988 
192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:22 vm02 bash[22526]: audit 2026-03-10T05:52:22.204115+0000 mon.c (mon.1) 144 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:22 vm02 bash[22526]: audit 2026-03-10T05:52:22.204697+0000 mon.c (mon.1) 145 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:22 vm02 bash[22526]: audit 2026-03-10T05:52:22.205294+0000 mon.c (mon.1) 146 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:22 vm02 bash[22526]: audit 2026-03-10T05:52:22.205754+0000 mon.c (mon.1) 147 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:22 vm02 bash[22526]: audit 2026-03-10T05:52:22.206416+0000 mon.c (mon.1) 148 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:22 vm02 bash[22526]: cluster 2026-03-10T05:52:22.398305+0000 mon.a (mon.0) 889 : cluster [INF] Manager daemon y is now available 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:22 vm02 bash[22526]: audit 2026-03-10T05:52:22.421640+0000 mon.c (mon.1) 149 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:22 vm02 bash[22526]: audit 2026-03-10T05:52:22.440265+0000 mon.c (mon.1) 150 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:22 vm02 bash[22526]: audit 2026-03-10T05:52:22.440525+0000 mon.a (mon.0) 890 : audit [INF] from='mgr.24988 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:22 vm02 bash[22526]: audit 2026-03-10T05:52:22.480340+0000 mon.c (mon.1) 151 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:52:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:22 vm02 bash[22526]: audit 2026-03-10T05:52:22.480580+0000 mon.a (mon.0) 891 : audit [INF] from='mgr.24988 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:52:23.001 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:22 vm05 bash[37598]: debug 2026-03-10T05:52:22.713+0000 7f965961d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T05:52:23.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:22 vm05 bash[17864]: cephadm 
2026-03-10T05:52:21.228666+0000 mgr.x (mgr.24773) 86 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-10T05:52:23.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:22 vm05 bash[17864]: cephadm 2026-03-10T05:52:21.228887+0000 mgr.x (mgr.24773) 87 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-10T05:52:23.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:22 vm05 bash[17864]: cephadm 2026-03-10T05:52:21.230516+0000 mgr.x (mgr.24773) 88 : cephadm [INF] Failing over to other MGR 2026-03-10T05:52:23.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:22 vm05 bash[17864]: audit 2026-03-10T05:52:22.197179+0000 mon.a (mon.0) 887 : audit [INF] from='mgr.24773 ' entity='mgr.x' cmd='[{"prefix": "mgr fail", "who": "x"}]': finished 2026-03-10T05:52:23.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:22 vm05 bash[17864]: cluster 2026-03-10T05:52:22.197244+0000 mon.a (mon.0) 888 : cluster [DBG] mgrmap e27: y(active, starting, since 0.967169s) 2026-03-10T05:52:23.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:22 vm05 bash[17864]: audit 2026-03-10T05:52:22.198484+0000 mon.c (mon.1) 134 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:52:23.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:22 vm05 bash[17864]: audit 2026-03-10T05:52:22.198659+0000 mon.c (mon.1) 135 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:52:23.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:22 vm05 bash[17864]: audit 2026-03-10T05:52:22.198702+0000 mon.c (mon.1) 136 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:52:23.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:22 vm05 bash[17864]: audit 2026-03-10T05:52:22.200528+0000 mon.c (mon.1) 137 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T05:52:23.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:22 vm05 bash[17864]: audit 2026-03-10T05:52:22.200891+0000 mon.c (mon.1) 138 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:52:23.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:22 vm05 bash[17864]: audit 2026-03-10T05:52:22.201548+0000 mon.c (mon.1) 139 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:52:23.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:22 vm05 bash[17864]: audit 2026-03-10T05:52:22.201923+0000 mon.c (mon.1) 140 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:52:23.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:22 vm05 bash[17864]: audit 2026-03-10T05:52:22.202529+0000 mon.c (mon.1) 141 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T05:52:23.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:22 vm05 bash[17864]: audit 2026-03-10T05:52:22.203093+0000 mon.c (mon.1) 142 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 
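The records above capture the cephadm mgr hand-off during the upgrade: mgr.x decides it must upgrade itself, issues "mgr fail" against its own rank, mon.a confirms the command finished, and mgrmap e27 promotes mgr.y to active. A minimal sketch, assuming only the standard ceph CLI and jq already used by this job's tasks, of how the hand-off could be checked by hand from a mon node:

    # Confirm which mgr is active after the forced failover (expect "y"),
    # then list both mgr daemons with the image each one is running.
    ceph mgr stat | jq -r '.active_name'
    ceph orch ps --daemon-type mgr
    # Overall upgrade progress while cephadm redeploys daemons on the new image.
    ceph orch upgrade status

Once mgr.x comes back on the target image it should reappear in the mgrmap as a standby, which is exactly what the "Standby manager daemon x started" record below reports.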
2026-03-10T05:52:23.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:22 vm05 bash[17864]: audit 2026-03-10T05:52:22.203601+0000 mon.c (mon.1) 143 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T05:52:23.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:22 vm05 bash[17864]: audit 2026-03-10T05:52:22.204115+0000 mon.c (mon.1) 144 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T05:52:23.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:22 vm05 bash[17864]: audit 2026-03-10T05:52:22.204697+0000 mon.c (mon.1) 145 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T05:52:23.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:22 vm05 bash[17864]: audit 2026-03-10T05:52:22.205294+0000 mon.c (mon.1) 146 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T05:52:23.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:22 vm05 bash[17864]: audit 2026-03-10T05:52:22.205754+0000 mon.c (mon.1) 147 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T05:52:23.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:22 vm05 bash[17864]: audit 2026-03-10T05:52:22.206416+0000 mon.c (mon.1) 148 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T05:52:23.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:22 vm05 bash[17864]: cluster 2026-03-10T05:52:22.398305+0000 mon.a (mon.0) 889 : cluster [INF] Manager daemon y is now available 2026-03-10T05:52:23.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:22 vm05 bash[17864]: audit 2026-03-10T05:52:22.421640+0000 mon.c (mon.1) 149 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:52:23.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:22 vm05 bash[17864]: audit 2026-03-10T05:52:22.440265+0000 mon.c (mon.1) 150 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:52:23.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:22 vm05 bash[17864]: audit 2026-03-10T05:52:22.440525+0000 mon.a (mon.0) 890 : audit [INF] from='mgr.24988 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:52:23.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:22 vm05 bash[17864]: audit 2026-03-10T05:52:22.480340+0000 mon.c (mon.1) 151 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:52:23.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:22 vm05 bash[17864]: audit 2026-03-10T05:52:22.480580+0000 mon.a (mon.0) 891 : audit [INF] from='mgr.24988 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:52:23.475 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:23 vm05 bash[37598]: debug 2026-03-10T05:52:23.149+0000 
7f965961d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T05:52:23.475 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:23 vm05 bash[37598]: debug 2026-03-10T05:52:23.233+0000 7f965961d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T05:52:23.475 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:23 vm05 bash[37598]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T05:52:23.475 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:23 vm05 bash[37598]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-10T05:52:23.475 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:23 vm05 bash[37598]: from numpy import show_config as show_numpy_config 2026-03-10T05:52:23.475 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:23 vm05 bash[37598]: debug 2026-03-10T05:52:23.349+0000 7f965961d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T05:52:23.751 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:23 vm05 bash[37598]: debug 2026-03-10T05:52:23.473+0000 7f965961d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T05:52:23.751 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:23 vm05 bash[37598]: debug 2026-03-10T05:52:23.509+0000 7f965961d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T05:52:23.751 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:23 vm05 bash[37598]: debug 2026-03-10T05:52:23.541+0000 7f965961d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T05:52:23.751 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:23 vm05 bash[37598]: debug 2026-03-10T05:52:23.577+0000 7f965961d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T05:52:23.751 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:23 vm05 bash[37598]: debug 2026-03-10T05:52:23.625+0000 7f965961d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T05:52:24.292 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:24 vm05 bash[37598]: debug 2026-03-10T05:52:24.017+0000 7f965961d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T05:52:24.292 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:24 vm05 bash[37598]: debug 2026-03-10T05:52:24.053+0000 7f965961d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T05:52:24.292 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:24 vm05 bash[37598]: debug 2026-03-10T05:52:24.085+0000 7f965961d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T05:52:24.292 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:24 vm05 bash[37598]: debug 2026-03-10T05:52:24.217+0000 7f965961d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T05:52:24.292 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:24 vm05 bash[37598]: debug 2026-03-10T05:52:24.257+0000 7f965961d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T05:52:24.292 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:24 vm05 bash[17864]: cluster 2026-03-10T05:52:23.204199+0000 mon.a (mon.0) 892 : cluster [DBG] 
mgrmap e28: y(active, since 1.97412s) 2026-03-10T05:52:24.292 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:24 vm05 bash[17864]: cephadm 2026-03-10T05:52:23.266807+0000 mgr.y (mgr.24988) 2 : cephadm [INF] [10/Mar/2026:05:52:23] ENGINE Bus STARTING 2026-03-10T05:52:24.292 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:24 vm05 bash[17864]: audit 2026-03-10T05:52:23.344773+0000 mgr.y (mgr.24988) 3 : audit [DBG] from='client.24940 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:52:24.292 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:24 vm05 bash[17864]: cephadm 2026-03-10T05:52:23.374155+0000 mgr.y (mgr.24988) 4 : cephadm [INF] [10/Mar/2026:05:52:23] ENGINE Serving on https://192.168.123.102:7150 2026-03-10T05:52:24.292 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:24 vm05 bash[17864]: cephadm 2026-03-10T05:52:23.374682+0000 mgr.y (mgr.24988) 5 : cephadm [INF] [10/Mar/2026:05:52:23] ENGINE Client ('192.168.123.102', 52020) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T05:52:24.292 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:24 vm05 bash[17864]: cephadm 2026-03-10T05:52:23.475397+0000 mgr.y (mgr.24988) 6 : cephadm [INF] [10/Mar/2026:05:52:23] ENGINE Serving on http://192.168.123.102:8765 2026-03-10T05:52:24.292 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:24 vm05 bash[17864]: cephadm 2026-03-10T05:52:23.475439+0000 mgr.y (mgr.24988) 7 : cephadm [INF] [10/Mar/2026:05:52:23] ENGINE Bus STARTED 2026-03-10T05:52:24.292 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:24 vm05 bash[40098]: ts=2026-03-10T05:52:24.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T05:52:24.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:24 vm02 bash[17462]: cluster 2026-03-10T05:52:23.204199+0000 mon.a (mon.0) 892 : cluster [DBG] mgrmap e28: y(active, since 1.97412s) 2026-03-10T05:52:24.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:24 vm02 bash[17462]: cephadm 2026-03-10T05:52:23.266807+0000 mgr.y (mgr.24988) 2 : cephadm [INF] [10/Mar/2026:05:52:23] ENGINE Bus STARTING 2026-03-10T05:52:24.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:24 vm02 bash[17462]: audit 2026-03-10T05:52:23.344773+0000 mgr.y (mgr.24988) 3 : audit [DBG] from='client.24940 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:52:24.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:24 vm02 bash[17462]: cephadm 2026-03-10T05:52:23.374155+0000 mgr.y (mgr.24988) 4 : cephadm [INF] [10/Mar/2026:05:52:23] ENGINE Serving on https://192.168.123.102:7150 2026-03-10T05:52:24.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:24 vm02 bash[17462]: cephadm 2026-03-10T05:52:23.374682+0000 mgr.y (mgr.24988) 5 : cephadm [INF] [10/Mar/2026:05:52:23] ENGINE Client ('192.168.123.102', 52020) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T05:52:24.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:24 vm02 bash[17462]: cephadm 2026-03-10T05:52:23.475397+0000 mgr.y (mgr.24988) 6 : cephadm [INF] [10/Mar/2026:05:52:23] ENGINE Serving on http://192.168.123.102:8765 2026-03-10T05:52:24.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:24 vm02 bash[17462]: cephadm 2026-03-10T05:52:23.475439+0000 mgr.y (mgr.24988) 7 : cephadm [INF] [10/Mar/2026:05:52:23] ENGINE Bus STARTED 2026-03-10T05:52:24.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:24 vm02 bash[22526]: cluster 2026-03-10T05:52:23.204199+0000 mon.a (mon.0) 892 : cluster [DBG] mgrmap e28: y(active, since 1.97412s) 2026-03-10T05:52:24.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:24 vm02 bash[22526]: cephadm 2026-03-10T05:52:23.266807+0000 mgr.y (mgr.24988) 2 : cephadm [INF] [10/Mar/2026:05:52:23] ENGINE Bus STARTING 2026-03-10T05:52:24.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:24 vm02 bash[22526]: audit 2026-03-10T05:52:23.344773+0000 mgr.y (mgr.24988) 3 : audit [DBG] from='client.24940 -' 
entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:52:24.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:24 vm02 bash[22526]: cephadm 2026-03-10T05:52:23.374155+0000 mgr.y (mgr.24988) 4 : cephadm [INF] [10/Mar/2026:05:52:23] ENGINE Serving on https://192.168.123.102:7150 2026-03-10T05:52:24.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:24 vm02 bash[22526]: cephadm 2026-03-10T05:52:23.374682+0000 mgr.y (mgr.24988) 5 : cephadm [INF] [10/Mar/2026:05:52:23] ENGINE Client ('192.168.123.102', 52020) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T05:52:24.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:24 vm02 bash[22526]: cephadm 2026-03-10T05:52:23.475397+0000 mgr.y (mgr.24988) 6 : cephadm [INF] [10/Mar/2026:05:52:23] ENGINE Serving on http://192.168.123.102:8765 2026-03-10T05:52:24.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:24 vm02 bash[22526]: cephadm 2026-03-10T05:52:23.475439+0000 mgr.y (mgr.24988) 7 : cephadm [INF] [10/Mar/2026:05:52:23] ENGINE Bus STARTED 2026-03-10T05:52:24.700 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:24 vm05 bash[37598]: debug 2026-03-10T05:52:24.293+0000 7f965961d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T05:52:24.701 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:24 vm05 bash[37598]: debug 2026-03-10T05:52:24.401+0000 7f965961d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T05:52:24.701 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:24 vm05 bash[37598]: debug 2026-03-10T05:52:24.541+0000 7f965961d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T05:52:25.001 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:24 vm05 bash[37598]: debug 2026-03-10T05:52:24.701+0000 7f965961d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T05:52:25.001 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:24 vm05 bash[37598]: debug 2026-03-10T05:52:24.733+0000 7f965961d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T05:52:25.001 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:24 vm05 bash[37598]: debug 2026-03-10T05:52:24.769+0000 7f965961d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T05:52:25.001 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:24 vm05 bash[37598]: debug 2026-03-10T05:52:24.909+0000 7f965961d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T05:52:25.501 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:25 vm05 bash[37598]: debug 2026-03-10T05:52:25.125+0000 7f965961d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T05:52:25.501 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:25 vm05 bash[37598]: [10/Mar/2026:05:52:25] ENGINE Bus STARTING 2026-03-10T05:52:25.501 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:25 vm05 bash[37598]: CherryPy Checker: 2026-03-10T05:52:25.501 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:25 vm05 bash[37598]: The Application mounted at '' has an empty config. 
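The burst of "mgr[py] Module ... has missing NOTIFY_TYPES member" records comes from the restarted mgr.x loading its Python modules; the member appears to be optional, so these read as one-time load warnings rather than module failures. A short sketch for triaging them from a captured log; the file name teuthology.log is a placeholder for wherever this job's log was archived:

    # Count the warnings per module to confirm each module logs the message
    # once at load time rather than repeatedly at runtime.
    grep -o 'Module [a-z_]* has missing NOTIFY_TYPES member' teuthology.log \
      | sort | uniq -c | sort -rn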
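The CephOSDFlapping evaluation failure above (and the CephNodeDiskspaceWarning failure that follows) is a many-to-many join error: around the mgr hand-off the exporter is scraped under two instance labels (the old per-mgr address and the fixed "ceph_cluster" instance), so ceph_osd_metadata and node_uname_info briefly carry duplicate series and the "on (...) group_left (...)" match in the alert expressions is no longer one-to-one. A hedged sketch of how the duplication could be confirmed against Prometheus's query API; the host and port are guessed from this deployment's prometheus.a placement on vm05 and may differ:

    # Any result here means the rules' right-hand join sides are temporarily
    # ambiguous; the condition should clear once the stale scrape target ages out.
    curl -sG 'http://192.168.123.105:9095/api/v1/query' \
      --data-urlencode 'query=count by (ceph_daemon) (ceph_osd_metadata) > 1'
    # One way to make the join tolerant of the overlap is to collapse the
    # metadata side first, e.g.
    #   max by (ceph_daemon, hostname) (ceph_osd_metadata)
    # in place of the bare ceph_osd_metadata vector (an illustration, not
    # the shipped rule).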
2026-03-10T05:52:25.501 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:25 vm05 bash[37598]: [10/Mar/2026:05:52:25] ENGINE Serving on http://:::9283 2026-03-10T05:52:25.501 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:25 vm05 bash[37598]: [10/Mar/2026:05:52:25] ENGINE Bus STARTED 2026-03-10T05:52:25.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:25 vm05 bash[17864]: cluster 2026-03-10T05:52:24.201399+0000 mgr.y (mgr.24988) 8 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:52:25.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:25 vm05 bash[17864]: cluster 2026-03-10T05:52:24.337643+0000 mon.a (mon.0) 893 : cluster [DBG] mgrmap e29: y(active, since 3s) 2026-03-10T05:52:25.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:25 vm05 bash[17864]: cluster 2026-03-10T05:52:25.125713+0000 mon.a (mon.0) 894 : cluster [DBG] Standby manager daemon x started 2026-03-10T05:52:25.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:25 vm05 bash[17864]: audit 2026-03-10T05:52:25.129021+0000 mon.b (mon.2) 83 : audit [DBG] from='mgr.? 192.168.123.105:0/871422913' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T05:52:25.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:25 vm05 bash[17864]: audit 2026-03-10T05:52:25.129417+0000 mon.b (mon.2) 84 : audit [DBG] from='mgr.? 192.168.123.105:0/871422913' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T05:52:25.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:25 vm05 bash[17864]: audit 2026-03-10T05:52:25.130569+0000 mon.b (mon.2) 85 : audit [DBG] from='mgr.? 192.168.123.105:0/871422913' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T05:52:25.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:25 vm05 bash[17864]: audit 2026-03-10T05:52:25.130992+0000 mon.b (mon.2) 86 : audit [DBG] from='mgr.? 192.168.123.105:0/871422913' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T05:52:25.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:25 vm02 bash[17462]: cluster 2026-03-10T05:52:24.201399+0000 mgr.y (mgr.24988) 8 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:52:25.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:25 vm02 bash[17462]: cluster 2026-03-10T05:52:24.337643+0000 mon.a (mon.0) 893 : cluster [DBG] mgrmap e29: y(active, since 3s) 2026-03-10T05:52:25.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:25 vm02 bash[17462]: cluster 2026-03-10T05:52:25.125713+0000 mon.a (mon.0) 894 : cluster [DBG] Standby manager daemon x started 2026-03-10T05:52:25.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:25 vm02 bash[17462]: audit 2026-03-10T05:52:25.129021+0000 mon.b (mon.2) 83 : audit [DBG] from='mgr.? 192.168.123.105:0/871422913' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T05:52:25.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:25 vm02 bash[17462]: audit 2026-03-10T05:52:25.129417+0000 mon.b (mon.2) 84 : audit [DBG] from='mgr.? 
192.168.123.105:0/871422913' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T05:52:25.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:25 vm02 bash[17462]: audit 2026-03-10T05:52:25.130569+0000 mon.b (mon.2) 85 : audit [DBG] from='mgr.? 192.168.123.105:0/871422913' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T05:52:25.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:25 vm02 bash[17462]: audit 2026-03-10T05:52:25.130992+0000 mon.b (mon.2) 86 : audit [DBG] from='mgr.? 192.168.123.105:0/871422913' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T05:52:25.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:25 vm02 bash[22526]: cluster 2026-03-10T05:52:24.201399+0000 mgr.y (mgr.24988) 8 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:52:25.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:25 vm02 bash[22526]: cluster 2026-03-10T05:52:24.337643+0000 mon.a (mon.0) 893 : cluster [DBG] mgrmap e29: y(active, since 3s) 2026-03-10T05:52:25.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:25 vm02 bash[22526]: cluster 2026-03-10T05:52:25.125713+0000 mon.a (mon.0) 894 : cluster [DBG] Standby manager daemon x started 2026-03-10T05:52:25.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:25 vm02 bash[22526]: audit 2026-03-10T05:52:25.129021+0000 mon.b (mon.2) 83 : audit [DBG] from='mgr.? 192.168.123.105:0/871422913' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T05:52:25.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:25 vm02 bash[22526]: audit 2026-03-10T05:52:25.129417+0000 mon.b (mon.2) 84 : audit [DBG] from='mgr.? 192.168.123.105:0/871422913' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T05:52:25.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:25 vm02 bash[22526]: audit 2026-03-10T05:52:25.130569+0000 mon.b (mon.2) 85 : audit [DBG] from='mgr.? 192.168.123.105:0/871422913' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T05:52:25.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:25 vm02 bash[22526]: audit 2026-03-10T05:52:25.130992+0000 mon.b (mon.2) 86 : audit [DBG] from='mgr.? 
192.168.123.105:0/871422913' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T05:52:26.347 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:26 vm05 bash[37598]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:52:26] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.51.0" 2026-03-10T05:52:26.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:26 vm05 bash[17864]: cluster 2026-03-10T05:52:25.351181+0000 mon.a (mon.0) 895 : cluster [DBG] mgrmap e30: y(active, since 4s), standbys: x 2026-03-10T05:52:26.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:26 vm05 bash[17864]: audit 2026-03-10T05:52:25.352549+0000 mon.c (mon.1) 152 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T05:52:26.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:26 vm02 bash[17462]: cluster 2026-03-10T05:52:25.351181+0000 mon.a (mon.0) 895 : cluster [DBG] mgrmap e30: y(active, since 4s), standbys: x 2026-03-10T05:52:26.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:26 vm02 bash[17462]: audit 2026-03-10T05:52:25.352549+0000 mon.c (mon.1) 152 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T05:52:26.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:26 vm02 bash[22526]: cluster 2026-03-10T05:52:25.351181+0000 mon.a (mon.0) 895 : cluster [DBG] mgrmap e30: y(active, since 4s), standbys: x 2026-03-10T05:52:26.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:26 vm02 bash[22526]: audit 2026-03-10T05:52:25.352549+0000 mon.c (mon.1) 152 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T05:52:27.251 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:26 vm05 bash[40098]: ts=2026-03-10T05:52:26.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T05:52:27.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:27 vm05 bash[17864]: cluster 2026-03-10T05:52:26.201662+0000 mgr.y (mgr.24988) 9 : cluster [DBG] pgmap v5: 161 pgs: 161 
active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:52:27.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:27 vm05 bash[17864]: cluster 2026-03-10T05:52:26.357549+0000 mon.a (mon.0) 896 : cluster [DBG] mgrmap e31: y(active, since 5s), standbys: x 2026-03-10T05:52:27.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:27 vm02 bash[17462]: cluster 2026-03-10T05:52:26.201662+0000 mgr.y (mgr.24988) 9 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:52:27.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:27 vm02 bash[17462]: cluster 2026-03-10T05:52:26.357549+0000 mon.a (mon.0) 896 : cluster [DBG] mgrmap e31: y(active, since 5s), standbys: x 2026-03-10T05:52:27.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:27 vm02 bash[22526]: cluster 2026-03-10T05:52:26.201662+0000 mgr.y (mgr.24988) 9 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:52:27.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:27 vm02 bash[22526]: cluster 2026-03-10T05:52:26.357549+0000 mon.a (mon.0) 896 : cluster [DBG] mgrmap e31: y(active, since 5s), standbys: x 2026-03-10T05:52:29.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:29 vm02 bash[17462]: audit 2026-03-10T05:52:28.058992+0000 mon.a (mon.0) 897 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:29.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:29 vm02 bash[17462]: audit 2026-03-10T05:52:28.066509+0000 mon.a (mon.0) 898 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:29.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:29 vm02 bash[17462]: audit 2026-03-10T05:52:28.136175+0000 mon.a (mon.0) 899 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:29.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:29 vm02 bash[17462]: audit 2026-03-10T05:52:28.144524+0000 mon.a (mon.0) 900 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:29.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:29 vm02 bash[17462]: cluster 2026-03-10T05:52:28.201965+0000 mgr.y (mgr.24988) 10 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:52:29.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:29 vm02 bash[17462]: audit 2026-03-10T05:52:28.611123+0000 mon.a (mon.0) 901 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:29.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:29 vm02 bash[17462]: audit 2026-03-10T05:52:28.618148+0000 mon.a (mon.0) 902 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:29.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:29 vm02 bash[17462]: audit 2026-03-10T05:52:28.619199+0000 mon.c (mon.1) 153 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:52:29.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:29 vm02 bash[17462]: audit 2026-03-10T05:52:28.619428+0000 mon.a (mon.0) 903 : audit [INF] from='mgr.24988 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:52:29.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:29 vm02 bash[17462]: audit 2026-03-10T05:52:28.701056+0000 mon.a (mon.0) 904 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:29.334 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:29 vm02 bash[17462]: audit 2026-03-10T05:52:28.706820+0000 mon.a (mon.0) 905 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:29.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:29 vm02 bash[22526]: audit 2026-03-10T05:52:28.058992+0000 mon.a (mon.0) 897 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:29.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:29 vm02 bash[22526]: audit 2026-03-10T05:52:28.066509+0000 mon.a (mon.0) 898 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:29.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:29 vm02 bash[22526]: audit 2026-03-10T05:52:28.136175+0000 mon.a (mon.0) 899 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:29.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:29 vm02 bash[22526]: audit 2026-03-10T05:52:28.144524+0000 mon.a (mon.0) 900 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:29.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:29 vm02 bash[22526]: cluster 2026-03-10T05:52:28.201965+0000 mgr.y (mgr.24988) 10 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:52:29.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:29 vm02 bash[22526]: audit 2026-03-10T05:52:28.611123+0000 mon.a (mon.0) 901 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:29.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:29 vm02 bash[22526]: audit 2026-03-10T05:52:28.618148+0000 mon.a (mon.0) 902 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:29.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:29 vm02 bash[22526]: audit 2026-03-10T05:52:28.619199+0000 mon.c (mon.1) 153 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:52:29.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:29 vm02 bash[22526]: audit 2026-03-10T05:52:28.619428+0000 mon.a (mon.0) 903 : audit [INF] from='mgr.24988 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:52:29.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:29 vm02 bash[22526]: audit 2026-03-10T05:52:28.701056+0000 mon.a (mon.0) 904 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:29.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:29 vm02 bash[22526]: audit 2026-03-10T05:52:28.706820+0000 mon.a (mon.0) 905 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:29.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:29 vm05 bash[17864]: audit 2026-03-10T05:52:28.058992+0000 mon.a (mon.0) 897 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:29.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:29 vm05 bash[17864]: audit 2026-03-10T05:52:28.066509+0000 mon.a (mon.0) 898 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:29.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:29 vm05 bash[17864]: audit 2026-03-10T05:52:28.136175+0000 mon.a (mon.0) 899 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:29.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:29 vm05 bash[17864]: audit 2026-03-10T05:52:28.144524+0000 mon.a (mon.0) 900 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:29.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:29 vm05 bash[17864]: 
cluster 2026-03-10T05:52:28.201965+0000 mgr.y (mgr.24988) 10 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:52:29.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:29 vm05 bash[17864]: audit 2026-03-10T05:52:28.611123+0000 mon.a (mon.0) 901 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:29.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:29 vm05 bash[17864]: audit 2026-03-10T05:52:28.618148+0000 mon.a (mon.0) 902 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:29.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:29 vm05 bash[17864]: audit 2026-03-10T05:52:28.619199+0000 mon.c (mon.1) 153 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:52:29.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:29 vm05 bash[17864]: audit 2026-03-10T05:52:28.619428+0000 mon.a (mon.0) 903 : audit [INF] from='mgr.24988 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:52:29.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:29 vm05 bash[17864]: audit 2026-03-10T05:52:28.701056+0000 mon.a (mon.0) 904 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:29.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:29 vm05 bash[17864]: audit 2026-03-10T05:52:28.706820+0000 mon.a (mon.0) 905 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:31.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:31 vm02 bash[17462]: cluster 2026-03-10T05:52:30.202462+0000 mgr.y (mgr.24988) 11 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-10T05:52:31.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:31 vm02 bash[22526]: cluster 2026-03-10T05:52:30.202462+0000 mgr.y (mgr.24988) 11 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-10T05:52:31.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:31 vm05 bash[17864]: cluster 2026-03-10T05:52:30.202462+0000 mgr.y (mgr.24988) 11 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-10T05:52:33.520 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:33 vm02 bash[17462]: cluster 2026-03-10T05:52:32.202758+0000 mgr.y (mgr.24988) 12 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-10T05:52:33.520 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:33 vm02 bash[22526]: cluster 2026-03-10T05:52:32.202758+0000 mgr.y (mgr.24988) 12 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-10T05:52:33.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:33 vm05 bash[17864]: cluster 2026-03-10T05:52:32.202758+0000 mgr.y (mgr.24988) 12 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-10T05:52:34.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:34 vm05 bash[17864]: audit 2026-03-10T05:52:33.346339+0000 mgr.y (mgr.24988) 13 : audit [DBG] from='client.24940 -' 
entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:52:34.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:34 vm02 bash[17462]: audit 2026-03-10T05:52:33.346339+0000 mgr.y (mgr.24988) 13 : audit [DBG] from='client.24940 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:52:34.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:34 vm02 bash[22526]: audit 2026-03-10T05:52:33.346339+0000 mgr.y (mgr.24988) 13 : audit [DBG] from='client.24940 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:52:35.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:35 vm02 bash[17462]: cluster 2026-03-10T05:52:34.203235+0000 mgr.y (mgr.24988) 14 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T05:52:35.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:35 vm02 bash[22526]: cluster 2026-03-10T05:52:34.203235+0000 mgr.y (mgr.24988) 14 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T05:52:36.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:35 vm05 bash[17864]: cluster 2026-03-10T05:52:34.203235+0000 mgr.y (mgr.24988) 14 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T05:52:36.501 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:36 vm05 bash[37598]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:52:36] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.51.0" 2026-03-10T05:52:36.827 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:36 vm02 bash[17462]: audit 2026-03-10T05:52:35.599774+0000 mon.a (mon.0) 906 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:36.827 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:36 vm02 bash[17462]: audit 2026-03-10T05:52:35.607822+0000 mon.a (mon.0) 907 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:36.827 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:36 vm02 bash[17462]: audit 2026-03-10T05:52:35.609019+0000 mon.c (mon.1) 154 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:52:36.827 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:36 vm02 bash[17462]: audit 2026-03-10T05:52:35.609285+0000 mon.a (mon.0) 908 : audit [INF] from='mgr.24988 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:52:36.827 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:36 vm02 bash[17462]: audit 2026-03-10T05:52:35.610390+0000 mon.c (mon.1) 155 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:36 vm02 bash[17462]: audit 2026-03-10T05:52:35.611072+0000 mon.c (mon.1) 156 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:36 vm02 bash[17462]: cephadm 2026-03-10T05:52:35.611863+0000 mgr.y (mgr.24988) 15 : cephadm [INF] Updating 
vm02:/etc/ceph/ceph.conf 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:36 vm02 bash[17462]: cephadm 2026-03-10T05:52:35.611945+0000 mgr.y (mgr.24988) 16 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:36 vm02 bash[17462]: cephadm 2026-03-10T05:52:35.652802+0000 mgr.y (mgr.24988) 17 : cephadm [INF] Updating vm05:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.conf 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:36 vm02 bash[17462]: cephadm 2026-03-10T05:52:35.652919+0000 mgr.y (mgr.24988) 18 : cephadm [INF] Updating vm02:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.conf 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:36 vm02 bash[17462]: cephadm 2026-03-10T05:52:35.689038+0000 mgr.y (mgr.24988) 19 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:36 vm02 bash[17462]: cephadm 2026-03-10T05:52:35.689169+0000 mgr.y (mgr.24988) 20 : cephadm [INF] Updating vm02:/etc/ceph/ceph.client.admin.keyring 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:36 vm02 bash[17462]: cephadm 2026-03-10T05:52:35.723253+0000 mgr.y (mgr.24988) 21 : cephadm [INF] Updating vm05:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.client.admin.keyring 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:36 vm02 bash[17462]: cephadm 2026-03-10T05:52:35.723371+0000 mgr.y (mgr.24988) 22 : cephadm [INF] Updating vm02:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.client.admin.keyring 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:36 vm02 bash[17462]: audit 2026-03-10T05:52:35.953424+0000 mon.a (mon.0) 909 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:36 vm02 bash[17462]: audit 2026-03-10T05:52:35.970532+0000 mon.a (mon.0) 910 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:36 vm02 bash[17462]: audit 2026-03-10T05:52:35.975826+0000 mon.a (mon.0) 911 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:36 vm02 bash[17462]: audit 2026-03-10T05:52:35.981343+0000 mon.a (mon.0) 912 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:36 vm02 bash[17462]: audit 2026-03-10T05:52:35.998686+0000 mon.a (mon.0) 913 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:36 vm02 bash[17462]: audit 2026-03-10T05:52:36.010074+0000 mon.c (mon.1) 157 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm02.mxbwmh", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:36 vm02 bash[17462]: audit 2026-03-10T05:52:36.010397+0000 mon.a (mon.0) 914 : audit [INF] from='mgr.24988 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm02.mxbwmh", "caps": ["mon", "profile 
rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:36 vm02 bash[17462]: audit 2026-03-10T05:52:36.013802+0000 mon.c (mon.1) 158 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:36 vm02 bash[17462]: audit 2026-03-10T05:52:36.468352+0000 mon.a (mon.0) 915 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:36 vm02 bash[17462]: audit 2026-03-10T05:52:36.476090+0000 mon.a (mon.0) 916 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:36 vm02 bash[22526]: audit 2026-03-10T05:52:35.599774+0000 mon.a (mon.0) 906 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:36 vm02 bash[22526]: audit 2026-03-10T05:52:35.607822+0000 mon.a (mon.0) 907 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:36 vm02 bash[22526]: audit 2026-03-10T05:52:35.609019+0000 mon.c (mon.1) 154 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:36 vm02 bash[22526]: audit 2026-03-10T05:52:35.609285+0000 mon.a (mon.0) 908 : audit [INF] from='mgr.24988 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:36 vm02 bash[22526]: audit 2026-03-10T05:52:35.610390+0000 mon.c (mon.1) 155 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:36 vm02 bash[22526]: audit 2026-03-10T05:52:35.611072+0000 mon.c (mon.1) 156 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:36 vm02 bash[22526]: cephadm 2026-03-10T05:52:35.611863+0000 mgr.y (mgr.24988) 15 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:36 vm02 bash[22526]: cephadm 2026-03-10T05:52:35.611945+0000 mgr.y (mgr.24988) 16 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:36 vm02 bash[22526]: cephadm 2026-03-10T05:52:35.652802+0000 mgr.y (mgr.24988) 17 : cephadm [INF] Updating vm05:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.conf 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:36 vm02 bash[22526]: cephadm 2026-03-10T05:52:35.652919+0000 mgr.y (mgr.24988) 18 : cephadm [INF] Updating vm02:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.conf 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:36 vm02 bash[22526]: cephadm 2026-03-10T05:52:35.689038+0000 mgr.y (mgr.24988) 19 : 
cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:36 vm02 bash[22526]: cephadm 2026-03-10T05:52:35.689169+0000 mgr.y (mgr.24988) 20 : cephadm [INF] Updating vm02:/etc/ceph/ceph.client.admin.keyring 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:36 vm02 bash[22526]: cephadm 2026-03-10T05:52:35.723253+0000 mgr.y (mgr.24988) 21 : cephadm [INF] Updating vm05:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.client.admin.keyring 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:36 vm02 bash[22526]: cephadm 2026-03-10T05:52:35.723371+0000 mgr.y (mgr.24988) 22 : cephadm [INF] Updating vm02:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.client.admin.keyring 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:36 vm02 bash[22526]: audit 2026-03-10T05:52:35.953424+0000 mon.a (mon.0) 909 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:36 vm02 bash[22526]: audit 2026-03-10T05:52:35.970532+0000 mon.a (mon.0) 910 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:36 vm02 bash[22526]: audit 2026-03-10T05:52:35.975826+0000 mon.a (mon.0) 911 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:36 vm02 bash[22526]: audit 2026-03-10T05:52:35.981343+0000 mon.a (mon.0) 912 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:36 vm02 bash[22526]: audit 2026-03-10T05:52:35.998686+0000 mon.a (mon.0) 913 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:36 vm02 bash[22526]: audit 2026-03-10T05:52:36.010074+0000 mon.c (mon.1) 157 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm02.mxbwmh", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:36 vm02 bash[22526]: audit 2026-03-10T05:52:36.010397+0000 mon.a (mon.0) 914 : audit [INF] from='mgr.24988 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm02.mxbwmh", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:36 vm02 bash[22526]: audit 2026-03-10T05:52:36.013802+0000 mon.c (mon.1) 158 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:36 vm02 bash[22526]: audit 2026-03-10T05:52:36.468352+0000 mon.a (mon.0) 915 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:36.828 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:36 vm02 bash[22526]: audit 2026-03-10T05:52:36.476090+0000 mon.a (mon.0) 916 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:36.946 
INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:36 vm05 bash[17864]: audit 2026-03-10T05:52:35.599774+0000 mon.a (mon.0) 906 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:36.970 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:36 vm05 bash[17864]: audit 2026-03-10T05:52:35.607822+0000 mon.a (mon.0) 907 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:36.970 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:36 vm05 bash[17864]: audit 2026-03-10T05:52:35.609019+0000 mon.c (mon.1) 154 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:52:36.970 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:36 vm05 bash[17864]: audit 2026-03-10T05:52:35.609285+0000 mon.a (mon.0) 908 : audit [INF] from='mgr.24988 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:52:36.970 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:36 vm05 bash[17864]: audit 2026-03-10T05:52:35.610390+0000 mon.c (mon.1) 155 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:52:36.970 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:36 vm05 bash[17864]: audit 2026-03-10T05:52:35.611072+0000 mon.c (mon.1) 156 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:52:36.970 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:36 vm05 bash[17864]: cephadm 2026-03-10T05:52:35.611863+0000 mgr.y (mgr.24988) 15 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf 2026-03-10T05:52:36.970 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:36 vm05 bash[17864]: cephadm 2026-03-10T05:52:35.611945+0000 mgr.y (mgr.24988) 16 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf 2026-03-10T05:52:36.970 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:36 vm05 bash[17864]: cephadm 2026-03-10T05:52:35.652802+0000 mgr.y (mgr.24988) 17 : cephadm [INF] Updating vm05:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.conf 2026-03-10T05:52:36.970 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:36 vm05 bash[17864]: cephadm 2026-03-10T05:52:35.652919+0000 mgr.y (mgr.24988) 18 : cephadm [INF] Updating vm02:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.conf 2026-03-10T05:52:36.970 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:36 vm05 bash[17864]: cephadm 2026-03-10T05:52:35.689038+0000 mgr.y (mgr.24988) 19 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T05:52:36.970 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:36 vm05 bash[17864]: cephadm 2026-03-10T05:52:35.689169+0000 mgr.y (mgr.24988) 20 : cephadm [INF] Updating vm02:/etc/ceph/ceph.client.admin.keyring 2026-03-10T05:52:36.970 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:36 vm05 bash[17864]: cephadm 2026-03-10T05:52:35.723253+0000 mgr.y (mgr.24988) 21 : cephadm [INF] Updating vm05:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.client.admin.keyring 2026-03-10T05:52:36.970 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:36 vm05 bash[17864]: cephadm 2026-03-10T05:52:35.723371+0000 mgr.y (mgr.24988) 22 : cephadm [INF] Updating vm02:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.client.admin.keyring 2026-03-10T05:52:36.970 
INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:36 vm05 bash[17864]: audit 2026-03-10T05:52:35.953424+0000 mon.a (mon.0) 909 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:36.970 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:36 vm05 bash[17864]: audit 2026-03-10T05:52:35.970532+0000 mon.a (mon.0) 910 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:36.970 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:36 vm05 bash[17864]: audit 2026-03-10T05:52:35.975826+0000 mon.a (mon.0) 911 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:36.970 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:36 vm05 bash[17864]: audit 2026-03-10T05:52:35.981343+0000 mon.a (mon.0) 912 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:36.970 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:36 vm05 bash[17864]: audit 2026-03-10T05:52:35.998686+0000 mon.a (mon.0) 913 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:36.970 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:36 vm05 bash[17864]: audit 2026-03-10T05:52:36.010074+0000 mon.c (mon.1) 157 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm02.mxbwmh", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T05:52:36.970 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:36 vm05 bash[17864]: audit 2026-03-10T05:52:36.010397+0000 mon.a (mon.0) 914 : audit [INF] from='mgr.24988 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm02.mxbwmh", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T05:52:36.970 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:36 vm05 bash[17864]: audit 2026-03-10T05:52:36.013802+0000 mon.c (mon.1) 158 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:52:36.970 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:36 vm05 bash[17864]: audit 2026-03-10T05:52:36.468352+0000 mon.a (mon.0) 915 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:36.970 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:36 vm05 bash[17864]: audit 2026-03-10T05:52:36.476090+0000 mon.a (mon.0) 916 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:37.251 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:36 vm05 bash[40098]: ts=2026-03-10T05:52:36.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side 
of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T05:52:37.580 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 systemd[1]: Stopping Ceph prometheus.a for 107483ae-1c44-11f1-b530-c1172cd6122a... 2026-03-10T05:52:37.581 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[40098]: ts=2026-03-10T05:52:37.391Z caller=main.go:964 level=warn msg="Received SIGTERM, exiting gracefully..." 2026-03-10T05:52:37.581 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[40098]: ts=2026-03-10T05:52:37.391Z caller=main.go:988 level=info msg="Stopping scrape discovery manager..." 2026-03-10T05:52:37.581 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[40098]: ts=2026-03-10T05:52:37.391Z caller=main.go:1002 level=info msg="Stopping notify discovery manager..." 2026-03-10T05:52:37.581 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[40098]: ts=2026-03-10T05:52:37.391Z caller=manager.go:177 level=info component="rule manager" msg="Stopping rule manager..." 2026-03-10T05:52:37.581 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[40098]: ts=2026-03-10T05:52:37.391Z caller=main.go:984 level=info msg="Scrape discovery manager stopped" 2026-03-10T05:52:37.581 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[40098]: ts=2026-03-10T05:52:37.391Z caller=main.go:998 level=info msg="Notify discovery manager stopped" 2026-03-10T05:52:37.581 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[40098]: ts=2026-03-10T05:52:37.391Z caller=manager.go:187 level=info component="rule manager" msg="Rule manager stopped" 2026-03-10T05:52:37.581 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[40098]: ts=2026-03-10T05:52:37.391Z caller=main.go:1039 level=info msg="Stopping scrape manager..." 2026-03-10T05:52:37.581 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[40098]: ts=2026-03-10T05:52:37.392Z caller=main.go:1031 level=info msg="Scrape manager stopped" 2026-03-10T05:52:37.581 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[40098]: ts=2026-03-10T05:52:37.393Z caller=notifier.go:618 level=info component=notifier msg="Stopping notification manager..." 2026-03-10T05:52:37.581 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[40098]: ts=2026-03-10T05:52:37.393Z caller=main.go:1261 level=info msg="Notifier manager stopped" 2026-03-10T05:52:37.581 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[40098]: ts=2026-03-10T05:52:37.393Z caller=main.go:1273 level=info msg="See you next time!" 
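
[editor's note] The CephNodeDiskspaceWarning failure just above is a PromQL many-to-many join: two node_uname_info series exist for instance="vm05", one carrying the cluster label and one without it, so the "* on (instance) group_left (nodename)" join in the alert rule has no unique right-hand side. A minimal shell sketch for confirming the duplicate and trying a deduplicated variant of the expression, assuming the Prometheus HTTP API on port 9095 as shown in the log; the aggregation workaround is an illustration for local testing, not the upstream rule fix:

    PROM=http://vm05:9095
    # Count node_uname_info series per instance; a count above 1 reproduces
    # the "matching labels must be unique on one side" failure seen above.
    curl -sG "$PROM/api/v1/query" \
      --data-urlencode 'query=count by (instance) (node_uname_info)'
    # Collapse the duplicates before the join so the right-hand side is
    # unique per instance (hypothetical workaround, not the shipped rule):
    curl -sG "$PROM/api/v1/query" \
      --data-urlencode 'query=predict_linear(node_filesystem_free_bytes{device=~"/.*"}[2d], 3600*24*5) * on (instance) group_left (nodename) max by (instance, nodename) (node_uname_info) < 0'
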
2026-03-10T05:52:37.581 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[41192]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-prometheus-a 2026-03-10T05:52:37.581 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@prometheus.a.service: Deactivated successfully. 2026-03-10T05:52:37.581 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 systemd[1]: Stopped Ceph prometheus.a for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T05:52:37.581 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 systemd[1]: Started Ceph prometheus.a for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T05:52:37.876 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:37 vm05 bash[17864]: cephadm 2026-03-10T05:52:36.009684+0000 mgr.y (mgr.24988) 23 : cephadm [INF] Reconfiguring iscsi.foo.vm02.mxbwmh (dependencies changed)... 2026-03-10T05:52:37.876 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:37 vm05 bash[17864]: cephadm 2026-03-10T05:52:36.014632+0000 mgr.y (mgr.24988) 24 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm02.mxbwmh on vm02 2026-03-10T05:52:37.876 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:37 vm05 bash[17864]: cluster 2026-03-10T05:52:36.203538+0000 mgr.y (mgr.24988) 25 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T05:52:37.876 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:37 vm05 bash[17864]: cephadm 2026-03-10T05:52:36.480334+0000 mgr.y (mgr.24988) 26 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-10T05:52:37.876 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:37 vm05 bash[17864]: cephadm 2026-03-10T05:52:36.722967+0000 mgr.y (mgr.24988) 27 : cephadm [INF] Reconfiguring daemon prometheus.a on vm05 2026-03-10T05:52:37.876 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:37 vm05 bash[17864]: audit 2026-03-10T05:52:37.015993+0000 mon.c (mon.1) 159 : audit [DBG] from='client.? 
192.168.123.102:0/2581955639' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T05:52:37.876 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:37 vm05 bash[17864]: audit 2026-03-10T05:52:37.422102+0000 mon.c (mon.1) 160 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:52:37.876 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:37 vm05 bash[17864]: audit 2026-03-10T05:52:37.467831+0000 mon.a (mon.0) 917 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:37.876 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:37 vm05 bash[17864]: audit 2026-03-10T05:52:37.474435+0000 mon.a (mon.0) 918 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:37.876 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:37 vm05 bash[17864]: audit 2026-03-10T05:52:37.478063+0000 mon.c (mon.1) 161 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T05:52:37.876 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:37 vm05 bash[17864]: audit 2026-03-10T05:52:37.486463+0000 mon.a (mon.0) 919 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:37.876 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:37 vm05 bash[17864]: audit 2026-03-10T05:52:37.489091+0000 mon.c (mon.1) 162 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T05:52:37.876 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:37 vm05 bash[17864]: audit 2026-03-10T05:52:37.491160+0000 mon.c (mon.1) 163 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch 2026-03-10T05:52:37.876 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:37 vm05 bash[17864]: audit 2026-03-10T05:52:37.494722+0000 mon.a (mon.0) 920 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:37.876 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:37 vm05 bash[17864]: audit 2026-03-10T05:52:37.498522+0000 mon.c (mon.1) 164 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T05:52:37.876 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:37 vm05 bash[17864]: audit 2026-03-10T05:52:37.527121+0000 mon.c (mon.1) 165 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:52:37.877 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[41269]: ts=2026-03-10T05:52:37.582Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)" 2026-03-10T05:52:37.877 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[41269]: ts=2026-03-10T05:52:37.584Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)" 2026-03-10T05:52:37.877 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[41269]: ts=2026-03-10T05:52:37.584Z caller=main.go:623 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm05 
(none))" 2026-03-10T05:52:37.877 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[41269]: ts=2026-03-10T05:52:37.585Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-10T05:52:37.877 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[41269]: ts=2026-03-10T05:52:37.585Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-10T05:52:37.877 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[41269]: ts=2026-03-10T05:52:37.587Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095 2026-03-10T05:52:37.877 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[41269]: ts=2026-03-10T05:52:37.587Z caller=main.go:1129 level=info msg="Starting TSDB ..." 2026-03-10T05:52:37.877 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[41269]: ts=2026-03-10T05:52:37.589Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-10T05:52:37.877 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[41269]: ts=2026-03-10T05:52:37.589Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.072µs 2026-03-10T05:52:37.877 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[41269]: ts=2026-03-10T05:52:37.589Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-10T05:52:37.877 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[41269]: ts=2026-03-10T05:52:37.589Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095 2026-03-10T05:52:37.877 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[41269]: ts=2026-03-10T05:52:37.589Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9095 2026-03-10T05:52:37.877 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[41269]: ts=2026-03-10T05:52:37.598Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=3 2026-03-10T05:52:37.877 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[41269]: ts=2026-03-10T05:52:37.618Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=3 2026-03-10T05:52:37.877 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[41269]: ts=2026-03-10T05:52:37.620Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=2 maxSegment=3 2026-03-10T05:52:37.877 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[41269]: ts=2026-03-10T05:52:37.621Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=3 maxSegment=3 2026-03-10T05:52:37.877 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[41269]: ts=2026-03-10T05:52:37.621Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=26.53µs wal_replay_duration=31.672907ms wbl_replay_duration=14.688µs total_replay_duration=32.015792ms 2026-03-10T05:52:37.877 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[41269]: ts=2026-03-10T05:52:37.622Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 2026-03-10T05:52:37.877 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[41269]: ts=2026-03-10T05:52:37.622Z caller=main.go:1153 level=info msg="TSDB started" 2026-03-10T05:52:37.877 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[41269]: ts=2026-03-10T05:52:37.623Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-10T05:52:37.877 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[41269]: ts=2026-03-10T05:52:37.633Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=10.627914ms db_storage=992ns remote_storage=811ns web_handler=431ns query_engine=701ns scrape=680.399µs scrape_sd=69.101µs notify=6.893µs notify_sd=4.979µs rules=9.40687ms tracing=3.436µs 2026-03-10T05:52:37.877 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[41269]: ts=2026-03-10T05:52:37.633Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 2026-03-10T05:52:37.877 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:37 vm05 bash[41269]: ts=2026-03-10T05:52:37.633Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 2026-03-10T05:52:38.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:37 vm02 bash[17462]: cephadm 2026-03-10T05:52:36.009684+0000 mgr.y (mgr.24988) 23 : cephadm [INF] Reconfiguring iscsi.foo.vm02.mxbwmh (dependencies changed)... 
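
[editor's note] The "auth get-or-create" dispatches earlier in this window show the exact capability set cephadm grants the iscsi daemon's client key while reconfiguring it. For reference, the same grant expressed as a standalone command; a sketch only, since cephadm reissues it on each reconfigure, so running it by hand on this cluster should be a no-op (entity name and caps copied verbatim from the audit entries above):

    ceph auth get-or-create client.iscsi.foo.vm02.mxbwmh \
      mon 'profile rbd, allow command "osd blocklist", allow command "config-key get" with "key" prefix "iscsi/"' \
      mgr 'allow command "service status"' \
      osd 'allow rwx'
    # Inspect the resulting keyring entry:
    ceph auth get client.iscsi.foo.vm02.mxbwmh
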
2026-03-10T05:52:38.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:37 vm02 bash[17462]: cephadm 2026-03-10T05:52:36.014632+0000 mgr.y (mgr.24988) 24 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm02.mxbwmh on vm02 2026-03-10T05:52:38.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:37 vm02 bash[17462]: cluster 2026-03-10T05:52:36.203538+0000 mgr.y (mgr.24988) 25 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T05:52:38.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:37 vm02 bash[17462]: cephadm 2026-03-10T05:52:36.480334+0000 mgr.y (mgr.24988) 26 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-10T05:52:38.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:37 vm02 bash[17462]: cephadm 2026-03-10T05:52:36.722967+0000 mgr.y (mgr.24988) 27 : cephadm [INF] Reconfiguring daemon prometheus.a on vm05 2026-03-10T05:52:38.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:37 vm02 bash[17462]: audit 2026-03-10T05:52:37.015993+0000 mon.c (mon.1) 159 : audit [DBG] from='client.? 192.168.123.102:0/2581955639' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T05:52:38.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:37 vm02 bash[17462]: audit 2026-03-10T05:52:37.422102+0000 mon.c (mon.1) 160 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:52:38.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:37 vm02 bash[17462]: audit 2026-03-10T05:52:37.467831+0000 mon.a (mon.0) 917 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:38.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:37 vm02 bash[17462]: audit 2026-03-10T05:52:37.474435+0000 mon.a (mon.0) 918 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:38.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:37 vm02 bash[17462]: audit 2026-03-10T05:52:37.478063+0000 mon.c (mon.1) 161 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T05:52:38.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:37 vm02 bash[17462]: audit 2026-03-10T05:52:37.486463+0000 mon.a (mon.0) 919 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:38.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:37 vm02 bash[17462]: audit 2026-03-10T05:52:37.489091+0000 mon.c (mon.1) 162 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T05:52:38.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:37 vm02 bash[17462]: audit 2026-03-10T05:52:37.491160+0000 mon.c (mon.1) 163 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch 2026-03-10T05:52:38.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:37 vm02 bash[17462]: audit 2026-03-10T05:52:37.494722+0000 mon.a (mon.0) 920 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:38.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:37 vm02 bash[17462]: audit 2026-03-10T05:52:37.498522+0000 mon.c (mon.1) 164 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 
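
[editor's note] The dashboard set-iscsi-api-ssl-verification, iscsi-gateway-add, and get-prometheus-api-host dispatches above are mgr dashboard module commands that cephadm drives automatically after the iscsi reconfigure. Spelled out as CLI calls, as a sketch: iscsi-gateway-add takes the gateway URL from a file or stdin via -i rather than argv, which is why only "name": "vm02" appears in the audit payload; the empty user:pass URL matches the "Adding iSCSI gateway" entries further below.

    ceph dashboard set-iscsi-api-ssl-verification true
    # Gateway URL is read from stdin (-i -), not from the command line:
    echo 'http://:@192.168.123.102:5000' | ceph dashboard iscsi-gateway-add -i - vm02
    ceph dashboard iscsi-gateway-list
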
2026-03-10T05:52:38.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:37 vm02 bash[17462]: audit 2026-03-10T05:52:37.527121+0000 mon.c (mon.1) 165 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:52:38.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:37 vm02 bash[22526]: cephadm 2026-03-10T05:52:36.009684+0000 mgr.y (mgr.24988) 23 : cephadm [INF] Reconfiguring iscsi.foo.vm02.mxbwmh (dependencies changed)... 2026-03-10T05:52:38.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:37 vm02 bash[22526]: cephadm 2026-03-10T05:52:36.014632+0000 mgr.y (mgr.24988) 24 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm02.mxbwmh on vm02 2026-03-10T05:52:38.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:37 vm02 bash[22526]: cluster 2026-03-10T05:52:36.203538+0000 mgr.y (mgr.24988) 25 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T05:52:38.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:37 vm02 bash[22526]: cephadm 2026-03-10T05:52:36.480334+0000 mgr.y (mgr.24988) 26 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-10T05:52:38.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:37 vm02 bash[22526]: cephadm 2026-03-10T05:52:36.722967+0000 mgr.y (mgr.24988) 27 : cephadm [INF] Reconfiguring daemon prometheus.a on vm05 2026-03-10T05:52:38.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:37 vm02 bash[22526]: audit 2026-03-10T05:52:37.015993+0000 mon.c (mon.1) 159 : audit [DBG] from='client.? 192.168.123.102:0/2581955639' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T05:52:38.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:37 vm02 bash[22526]: audit 2026-03-10T05:52:37.422102+0000 mon.c (mon.1) 160 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:52:38.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:37 vm02 bash[22526]: audit 2026-03-10T05:52:37.467831+0000 mon.a (mon.0) 917 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:38.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:37 vm02 bash[22526]: audit 2026-03-10T05:52:37.474435+0000 mon.a (mon.0) 918 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:38.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:37 vm02 bash[22526]: audit 2026-03-10T05:52:37.478063+0000 mon.c (mon.1) 161 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T05:52:38.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:37 vm02 bash[22526]: audit 2026-03-10T05:52:37.486463+0000 mon.a (mon.0) 919 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:38.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:37 vm02 bash[22526]: audit 2026-03-10T05:52:37.489091+0000 mon.c (mon.1) 162 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T05:52:38.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:37 vm02 bash[22526]: audit 2026-03-10T05:52:37.491160+0000 mon.c (mon.1) 163 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", 
"name": "vm02"}]: dispatch 2026-03-10T05:52:38.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:37 vm02 bash[22526]: audit 2026-03-10T05:52:37.494722+0000 mon.a (mon.0) 920 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:38.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:37 vm02 bash[22526]: audit 2026-03-10T05:52:37.498522+0000 mon.c (mon.1) 164 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T05:52:38.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:37 vm02 bash[22526]: audit 2026-03-10T05:52:37.527121+0000 mon.c (mon.1) 165 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:52:38.483 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:38 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:38.483 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:38 vm05 systemd[1]: Stopping Ceph mgr.x for 107483ae-1c44-11f1-b530-c1172cd6122a... 2026-03-10T05:52:38.483 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:52:38 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:38.483 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:52:38 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:38.484 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:38 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:38.484 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:52:38 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:38.484 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:52:38 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:38.484 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:52:38 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:38.484 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:38 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:38.484 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:38 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:38.751 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:38 vm05 bash[41541]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-mgr-x 2026-03-10T05:52:38.751 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:38 vm05 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mgr.x.service: Main process exited, code=exited, status=143/n/a 2026-03-10T05:52:38.751 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:38 vm05 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mgr.x.service: Failed with result 'exit-code'. 2026-03-10T05:52:38.751 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:38 vm05 systemd[1]: Stopped Ceph mgr.x for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T05:52:38.751 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:38 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:38.751 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:38 vm05 systemd[1]: Started Ceph mgr.x for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T05:52:38.751 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:52:38 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:38.751 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:52:38 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:38.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:38 vm05 bash[17864]: audit 2026-03-10T05:52:37.478703+0000 mgr.y (mgr.24988) 28 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T05:52:38.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:38 vm05 bash[17864]: cephadm 2026-03-10T05:52:37.488869+0000 mgr.y (mgr.24988) 29 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.102:5000 to Dashboard 2026-03-10T05:52:38.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:38 vm05 bash[17864]: audit 2026-03-10T05:52:37.489402+0000 mgr.y (mgr.24988) 30 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T05:52:38.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:38 vm05 bash[17864]: audit 2026-03-10T05:52:37.491421+0000 mgr.y (mgr.24988) 31 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch 2026-03-10T05:52:38.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:38 vm05 bash[17864]: audit 2026-03-10T05:52:37.498775+0000 mgr.y (mgr.24988) 32 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T05:52:38.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:38 vm05 bash[17864]: audit 2026-03-10T05:52:37.913897+0000 mon.c (mon.1) 166 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T05:52:38.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:38 vm05 bash[17864]: audit 2026-03-10T05:52:37.914265+0000 mon.a (mon.0) 921 : audit [INF] from='mgr.24988 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T05:52:38.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:38 vm05 bash[17864]: audit 2026-03-10T05:52:37.915178+0000 mon.c (mon.1) 167 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T05:52:38.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:38 vm05 bash[17864]: audit 2026-03-10T05:52:37.915940+0000 mon.c (mon.1) 168 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:52:38.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:38 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:38.752 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:52:38 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:38.752 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:52:38 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:38.752 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:52:38 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:38.752 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:38 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:38.752 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:52:38 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:39.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:38 vm02 bash[17462]: audit 2026-03-10T05:52:37.478703+0000 mgr.y (mgr.24988) 28 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T05:52:39.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:38 vm02 bash[17462]: cephadm 2026-03-10T05:52:37.488869+0000 mgr.y (mgr.24988) 29 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.102:5000 to Dashboard 2026-03-10T05:52:39.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:38 vm02 bash[17462]: audit 2026-03-10T05:52:37.489402+0000 mgr.y (mgr.24988) 30 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T05:52:39.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:38 vm02 bash[17462]: audit 2026-03-10T05:52:37.491421+0000 mgr.y (mgr.24988) 31 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch 2026-03-10T05:52:39.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:38 vm02 bash[17462]: audit 2026-03-10T05:52:37.498775+0000 mgr.y (mgr.24988) 32 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T05:52:39.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:38 vm02 bash[17462]: audit 2026-03-10T05:52:37.913897+0000 mon.c (mon.1) 166 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T05:52:39.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:38 vm02 bash[17462]: audit 2026-03-10T05:52:37.914265+0000 mon.a (mon.0) 921 : audit [INF] from='mgr.24988 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T05:52:39.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:38 vm02 bash[17462]: audit 2026-03-10T05:52:37.915178+0000 mon.c (mon.1) 167 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T05:52:39.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:38 vm02 bash[17462]: audit 2026-03-10T05:52:37.915940+0000 mon.c (mon.1) 168 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:52:39.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:38 vm02 bash[22526]: audit 2026-03-10T05:52:37.478703+0000 mgr.y (mgr.24988) 28 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T05:52:39.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:38 vm02 bash[22526]: cephadm 2026-03-10T05:52:37.488869+0000 mgr.y (mgr.24988) 29 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.102:5000 to Dashboard 2026-03-10T05:52:39.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:38 vm02 bash[22526]: audit 2026-03-10T05:52:37.489402+0000 mgr.y (mgr.24988) 30 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T05:52:39.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:38 vm02 bash[22526]: audit 2026-03-10T05:52:37.491421+0000 mgr.y (mgr.24988) 31 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch 2026-03-10T05:52:39.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:38 vm02 bash[22526]: audit 2026-03-10T05:52:37.498775+0000 mgr.y (mgr.24988) 32 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T05:52:39.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:38 vm02 bash[22526]: audit 2026-03-10T05:52:37.913897+0000 mon.c (mon.1) 166 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T05:52:39.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:38 vm02 bash[22526]: audit 2026-03-10T05:52:37.914265+0000 mon.a (mon.0) 921 : audit [INF] from='mgr.24988 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T05:52:39.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:38 vm02 bash[22526]: audit 2026-03-10T05:52:37.915178+0000 mon.c (mon.1) 167 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T05:52:39.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:38 vm02 bash[22526]: audit 2026-03-10T05:52:37.915940+0000 mon.c (mon.1) 168 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:52:39.224 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:39 vm05 bash[41654]: debug 2026-03-10T05:52:39.057+0000 7fc0f16e6140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T05:52:39.224 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:39 vm05 bash[41654]: debug 2026-03-10T05:52:39.105+0000 7fc0f16e6140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T05:52:39.494 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:39 vm05 bash[41654]: debug 2026-03-10T05:52:39.221+0000 7fc0f16e6140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T05:52:39.751 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:39 vm05 bash[41654]: debug 2026-03-10T05:52:39.489+0000 7fc0f16e6140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T05:52:40.250 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:39 vm05 bash[41654]: debug 2026-03-10T05:52:39.913+0000 7fc0f16e6140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T05:52:40.250 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:39 vm05 bash[41654]: debug 2026-03-10T05:52:39.989+0000 7fc0f16e6140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T05:52:40.250 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:40 vm05 bash[41654]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T05:52:40.250 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:40 vm05 bash[41654]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-10T05:52:40.250 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:40 vm05 bash[41654]: from numpy import show_config as show_numpy_config 2026-03-10T05:52:40.250 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:40 vm05 bash[41654]: debug 2026-03-10T05:52:40.117+0000 7fc0f16e6140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T05:52:40.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:39 vm05 bash[17864]: cephadm 2026-03-10T05:52:37.913443+0000 mgr.y (mgr.24988) 33 : cephadm [INF] Upgrade: Updating mgr.x 2026-03-10T05:52:40.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:39 vm05 bash[17864]: cephadm 2026-03-10T05:52:37.916609+0000 mgr.y (mgr.24988) 34 : cephadm [INF] Deploying daemon mgr.x on vm05 2026-03-10T05:52:40.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:39 vm05 bash[17864]: cluster 2026-03-10T05:52:38.203849+0000 mgr.y (mgr.24988) 35 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T05:52:40.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:39 vm05 bash[17864]: audit 2026-03-10T05:52:38.817675+0000 mon.a (mon.0) 922 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:40.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:39 vm05 bash[17864]: audit 2026-03-10T05:52:38.838370+0000 mon.a (mon.0) 923 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:40.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:39 vm02 bash[22526]: cephadm 2026-03-10T05:52:37.913443+0000 mgr.y (mgr.24988) 33 : cephadm [INF] Upgrade: Updating mgr.x 2026-03-10T05:52:40.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:39 vm02 bash[22526]: cephadm 2026-03-10T05:52:37.916609+0000 mgr.y (mgr.24988) 34 : cephadm [INF] Deploying daemon mgr.x on vm05 2026-03-10T05:52:40.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:39 vm02 bash[22526]: cluster 2026-03-10T05:52:38.203849+0000 mgr.y (mgr.24988) 35 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T05:52:40.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:39 vm02 bash[22526]: audit 2026-03-10T05:52:38.817675+0000 mon.a (mon.0) 922 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:40.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:39 vm02 bash[22526]: audit 2026-03-10T05:52:38.838370+0000 mon.a (mon.0) 923 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:40.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:39 vm02 bash[17462]: cephadm 2026-03-10T05:52:37.913443+0000 mgr.y (mgr.24988) 33 : cephadm [INF] Upgrade: Updating mgr.x 2026-03-10T05:52:40.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:39 vm02 bash[17462]: cephadm 2026-03-10T05:52:37.916609+0000 mgr.y (mgr.24988) 34 : cephadm [INF] Deploying daemon mgr.x on vm05 2026-03-10T05:52:40.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:39 vm02 bash[17462]: cluster 2026-03-10T05:52:38.203849+0000 mgr.y (mgr.24988) 35 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T05:52:40.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:39 vm02 bash[17462]: audit 2026-03-10T05:52:38.817675+0000 mon.a (mon.0) 922 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:40.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:39 vm02 bash[17462]: 
audit 2026-03-10T05:52:38.838370+0000 mon.a (mon.0) 923 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:40.501 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:40 vm05 bash[41654]: debug 2026-03-10T05:52:40.245+0000 7fc0f16e6140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T05:52:40.501 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:40 vm05 bash[41654]: debug 2026-03-10T05:52:40.281+0000 7fc0f16e6140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T05:52:40.501 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:40 vm05 bash[41654]: debug 2026-03-10T05:52:40.313+0000 7fc0f16e6140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T05:52:40.501 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:40 vm05 bash[41654]: debug 2026-03-10T05:52:40.349+0000 7fc0f16e6140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T05:52:40.501 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:40 vm05 bash[41654]: debug 2026-03-10T05:52:40.397+0000 7fc0f16e6140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T05:52:41.092 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:40 vm05 bash[41654]: debug 2026-03-10T05:52:40.809+0000 7fc0f16e6140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T05:52:41.092 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:40 vm05 bash[41654]: debug 2026-03-10T05:52:40.845+0000 7fc0f16e6140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T05:52:41.092 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:40 vm05 bash[41654]: debug 2026-03-10T05:52:40.877+0000 7fc0f16e6140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T05:52:41.092 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:41 vm05 bash[41654]: debug 2026-03-10T05:52:41.013+0000 7fc0f16e6140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T05:52:41.092 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:41 vm05 bash[41654]: debug 2026-03-10T05:52:41.049+0000 7fc0f16e6140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T05:52:41.092 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:40 vm05 bash[17864]: cluster 2026-03-10T05:52:40.204353+0000 mgr.y (mgr.24988) 36 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T05:52:41.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:40 vm02 bash[22526]: cluster 2026-03-10T05:52:40.204353+0000 mgr.y (mgr.24988) 36 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T05:52:41.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:40 vm02 bash[17462]: cluster 2026-03-10T05:52:40.204353+0000 mgr.y (mgr.24988) 36 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T05:52:41.345 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:41 vm05 bash[41654]: debug 2026-03-10T05:52:41.089+0000 7fc0f16e6140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T05:52:41.345 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:41 vm05 bash[41654]: debug 2026-03-10T05:52:41.197+0000 7fc0f16e6140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T05:52:41.713 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:41 vm05 bash[41654]: debug 
2026-03-10T05:52:41.341+0000 7fc0f16e6140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T05:52:41.713 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:41 vm05 bash[41654]: debug 2026-03-10T05:52:41.501+0000 7fc0f16e6140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T05:52:41.713 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:41 vm05 bash[41654]: debug 2026-03-10T05:52:41.533+0000 7fc0f16e6140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T05:52:41.713 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:41 vm05 bash[41654]: debug 2026-03-10T05:52:41.573+0000 7fc0f16e6140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T05:52:42.001 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:41 vm05 bash[41654]: debug 2026-03-10T05:52:41.709+0000 7fc0f16e6140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T05:52:42.001 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:41 vm05 bash[41654]: debug 2026-03-10T05:52:41.917+0000 7fc0f16e6140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T05:52:42.001 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:41 vm05 bash[41654]: [10/Mar/2026:05:52:41] ENGINE Bus STARTING 2026-03-10T05:52:42.001 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:41 vm05 bash[41654]: CherryPy Checker: 2026-03-10T05:52:42.001 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:41 vm05 bash[41654]: The Application mounted at '' has an empty config. 2026-03-10T05:52:42.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:41 vm05 bash[17864]: cluster 2026-03-10T05:52:41.925625+0000 mon.a (mon.0) 924 : cluster [DBG] Standby manager daemon x restarted 2026-03-10T05:52:42.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:41 vm05 bash[17864]: cluster 2026-03-10T05:52:41.925923+0000 mon.a (mon.0) 925 : cluster [DBG] Standby manager daemon x started 2026-03-10T05:52:42.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:41 vm05 bash[17864]: audit 2026-03-10T05:52:41.926442+0000 mon.a (mon.0) 926 : audit [DBG] from='mgr.? 192.168.123.105:0/3283129829' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T05:52:42.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:41 vm05 bash[17864]: audit 2026-03-10T05:52:41.926929+0000 mon.a (mon.0) 927 : audit [DBG] from='mgr.? 192.168.123.105:0/3283129829' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T05:52:42.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:41 vm05 bash[17864]: audit 2026-03-10T05:52:41.927949+0000 mon.a (mon.0) 928 : audit [DBG] from='mgr.? 192.168.123.105:0/3283129829' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T05:52:42.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:41 vm05 bash[17864]: audit 2026-03-10T05:52:41.928484+0000 mon.a (mon.0) 929 : audit [DBG] from='mgr.? 
192.168.123.105:0/3283129829' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T05:52:42.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:41 vm02 bash[22526]: cluster 2026-03-10T05:52:41.925625+0000 mon.a (mon.0) 924 : cluster [DBG] Standby manager daemon x restarted 2026-03-10T05:52:42.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:41 vm02 bash[22526]: cluster 2026-03-10T05:52:41.925923+0000 mon.a (mon.0) 925 : cluster [DBG] Standby manager daemon x started 2026-03-10T05:52:42.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:41 vm02 bash[22526]: audit 2026-03-10T05:52:41.926442+0000 mon.a (mon.0) 926 : audit [DBG] from='mgr.? 192.168.123.105:0/3283129829' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T05:52:42.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:41 vm02 bash[22526]: audit 2026-03-10T05:52:41.926929+0000 mon.a (mon.0) 927 : audit [DBG] from='mgr.? 192.168.123.105:0/3283129829' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T05:52:42.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:41 vm02 bash[22526]: audit 2026-03-10T05:52:41.927949+0000 mon.a (mon.0) 928 : audit [DBG] from='mgr.? 192.168.123.105:0/3283129829' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T05:52:42.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:41 vm02 bash[22526]: audit 2026-03-10T05:52:41.928484+0000 mon.a (mon.0) 929 : audit [DBG] from='mgr.? 192.168.123.105:0/3283129829' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T05:52:42.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:41 vm02 bash[17462]: cluster 2026-03-10T05:52:41.925625+0000 mon.a (mon.0) 924 : cluster [DBG] Standby manager daemon x restarted 2026-03-10T05:52:42.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:41 vm02 bash[17462]: cluster 2026-03-10T05:52:41.925923+0000 mon.a (mon.0) 925 : cluster [DBG] Standby manager daemon x started 2026-03-10T05:52:42.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:41 vm02 bash[17462]: audit 2026-03-10T05:52:41.926442+0000 mon.a (mon.0) 926 : audit [DBG] from='mgr.? 192.168.123.105:0/3283129829' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T05:52:42.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:41 vm02 bash[17462]: audit 2026-03-10T05:52:41.926929+0000 mon.a (mon.0) 927 : audit [DBG] from='mgr.? 192.168.123.105:0/3283129829' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T05:52:42.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:41 vm02 bash[17462]: audit 2026-03-10T05:52:41.927949+0000 mon.a (mon.0) 928 : audit [DBG] from='mgr.? 192.168.123.105:0/3283129829' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T05:52:42.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:41 vm02 bash[17462]: audit 2026-03-10T05:52:41.928484+0000 mon.a (mon.0) 929 : audit [DBG] from='mgr.? 
192.168.123.105:0/3283129829' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T05:52:42.501 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:42 vm05 bash[41654]: [10/Mar/2026:05:52:42] ENGINE Serving on http://:::9283 2026-03-10T05:52:42.501 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:52:42 vm05 bash[41654]: [10/Mar/2026:05:52:42] ENGINE Bus STARTED 2026-03-10T05:52:43.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:42 vm05 bash[17864]: cluster 2026-03-10T05:52:41.947435+0000 mon.a (mon.0) 930 : cluster [DBG] mgrmap e32: y(active, since 20s), standbys: x 2026-03-10T05:52:43.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:42 vm05 bash[17864]: cluster 2026-03-10T05:52:42.204676+0000 mgr.y (mgr.24988) 37 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s 2026-03-10T05:52:43.334 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:42 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:52:42] "GET /metrics HTTP/1.1" 200 37777 "" "Prometheus/2.51.0" 2026-03-10T05:52:43.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:42 vm02 bash[22526]: cluster 2026-03-10T05:52:41.947435+0000 mon.a (mon.0) 930 : cluster [DBG] mgrmap e32: y(active, since 20s), standbys: x 2026-03-10T05:52:43.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:42 vm02 bash[22526]: cluster 2026-03-10T05:52:42.204676+0000 mgr.y (mgr.24988) 37 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s 2026-03-10T05:52:43.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:42 vm02 bash[17462]: cluster 2026-03-10T05:52:41.947435+0000 mon.a (mon.0) 930 : cluster [DBG] mgrmap e32: y(active, since 20s), standbys: x 2026-03-10T05:52:43.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:42 vm02 bash[17462]: cluster 2026-03-10T05:52:42.204676+0000 mgr.y (mgr.24988) 37 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s 2026-03-10T05:52:44.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:44 vm05 bash[17864]: audit 2026-03-10T05:52:43.129829+0000 mon.a (mon.0) 931 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:44.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:44 vm05 bash[17864]: audit 2026-03-10T05:52:43.140916+0000 mon.a (mon.0) 932 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:44.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:44 vm05 bash[17864]: audit 2026-03-10T05:52:43.691534+0000 mon.a (mon.0) 933 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:44.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:44 vm05 bash[17864]: audit 2026-03-10T05:52:43.700251+0000 mon.a (mon.0) 934 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:44.501 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:44 vm05 bash[41269]: ts=2026-03-10T05:52:44.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ 
$value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T05:52:44.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:44 vm02 bash[17462]: audit 2026-03-10T05:52:43.129829+0000 mon.a (mon.0) 931 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:44.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:44 vm02 bash[17462]: audit 2026-03-10T05:52:43.140916+0000 mon.a (mon.0) 932 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:44.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:44 vm02 bash[17462]: audit 2026-03-10T05:52:43.691534+0000 mon.a (mon.0) 933 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:44.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:44 vm02 bash[17462]: audit 2026-03-10T05:52:43.700251+0000 mon.a (mon.0) 934 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:44.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:44 vm02 bash[22526]: audit 2026-03-10T05:52:43.129829+0000 mon.a (mon.0) 931 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:44.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:44 vm02 bash[22526]: audit 2026-03-10T05:52:43.140916+0000 mon.a (mon.0) 932 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:44.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:44 vm02 bash[22526]: audit 2026-03-10T05:52:43.691534+0000 mon.a (mon.0) 933 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:44.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:44 vm02 bash[22526]: audit 2026-03-10T05:52:43.700251+0000 mon.a (mon.0) 934 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:45.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:45 vm02 bash[17462]: cluster 2026-03-10T05:52:44.205190+0000 mgr.y (mgr.24988) 38 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T05:52:45.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:45 vm02 bash[17462]: audit 2026-03-10T05:52:44.250100+0000 mon.a (mon.0) 935 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:45.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:45 vm02 bash[17462]: audit 
2026-03-10T05:52:44.257492+0000 mon.a (mon.0) 936 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:45.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:45 vm02 bash[22526]: cluster 2026-03-10T05:52:44.205190+0000 mgr.y (mgr.24988) 38 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T05:52:45.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:45 vm02 bash[22526]: audit 2026-03-10T05:52:44.250100+0000 mon.a (mon.0) 935 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:45.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:45 vm02 bash[22526]: audit 2026-03-10T05:52:44.257492+0000 mon.a (mon.0) 936 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:45.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:45 vm05 bash[17864]: cluster 2026-03-10T05:52:44.205190+0000 mgr.y (mgr.24988) 38 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T05:52:45.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:45 vm05 bash[17864]: audit 2026-03-10T05:52:44.250100+0000 mon.a (mon.0) 935 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:45.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:45 vm05 bash[17864]: audit 2026-03-10T05:52:44.257492+0000 mon.a (mon.0) 936 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:46.672 INFO:teuthology.orchestra.run.vm02.stdout:true 2026-03-10T05:52:47.038 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T05:52:47.054 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.a vm02 *:9093,9094 running (49s) 2s ago 5m 14.8M - 0.25.0 c8568f914cd2 7a7c5c2cddb6 2026-03-10T05:52:47.054 INFO:teuthology.orchestra.run.vm02.stdout:grafana.a vm05 *:3000 running (47s) 3s ago 5m 38.9M - dad864ee21e9 95c6d977988a 2026-03-10T05:52:47.054 INFO:teuthology.orchestra.run.vm02.stdout:iscsi.foo.vm02.mxbwmh vm02 running (10s) 2s ago 5m 41.2M - 3.5 e1d6a67b021e 62aba5b41046 2026-03-10T05:52:47.054 INFO:teuthology.orchestra.run.vm02.stdout:mgr.x vm05 *:8443,9283,8765 running (8s) 3s ago 8m 191M - 19.2.3-678-ge911bdeb 654f31e6858e 7579626ada90 2026-03-10T05:52:47.054 INFO:teuthology.orchestra.run.vm02.stdout:mgr.y vm02 *:8443,9283,8765 running (38s) 2s ago 8m 518M - 19.2.3-678-ge911bdeb 654f31e6858e ef46d0f7b15e 2026-03-10T05:52:47.054 INFO:teuthology.orchestra.run.vm02.stdout:mon.a vm02 running (8m) 2s ago 8m 53.6M 2048M 17.2.0 e1d6a67b021e bf59d12a7baa 2026-03-10T05:52:47.054 INFO:teuthology.orchestra.run.vm02.stdout:mon.b vm05 running (8m) 3s ago 8m 43.8M 2048M 17.2.0 e1d6a67b021e 96a2a71fd403 2026-03-10T05:52:47.054 INFO:teuthology.orchestra.run.vm02.stdout:mon.c vm02 running (8m) 2s ago 8m 45.1M 2048M 17.2.0 e1d6a67b021e 2f6dcf491c61 2026-03-10T05:52:47.054 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.a vm02 *:9100 running (46s) 2s ago 5m 6303k - 1.7.0 72c9c2088986 90288450bd1f 2026-03-10T05:52:47.054 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.b vm05 *:9100 running (44s) 3s ago 5m 6876k - 1.7.0 72c9c2088986 4e859143cb0e 2026-03-10T05:52:47.054 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (7m) 2s ago 7m 51.1M 4096M 17.2.0 e1d6a67b021e 563d55a3e6a4 2026-03-10T05:52:47.054 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (7m) 2s ago 7m 53.9M 4096M 17.2.0 e1d6a67b021e 8c25a1e89677 
2026-03-10T05:52:47.054 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (7m) 2s ago 7m 49.0M 4096M 17.2.0 e1d6a67b021e 826f54bdbc5c 2026-03-10T05:52:47.054 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm02 running (7m) 2s ago 7m 52.8M 4096M 17.2.0 e1d6a67b021e 0c6cfa53c9fd 2026-03-10T05:52:47.054 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (6m) 3s ago 6m 52.9M 4096M 17.2.0 e1d6a67b021e 4ffe1741f201 2026-03-10T05:52:47.054 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (6m) 3s ago 6m 51.4M 4096M 17.2.0 e1d6a67b021e cba5583c238e 2026-03-10T05:52:47.054 INFO:teuthology.orchestra.run.vm02.stdout:osd.6 vm05 running (6m) 3s ago 6m 49.2M 4096M 17.2.0 e1d6a67b021e 9d1b370357d7 2026-03-10T05:52:47.054 INFO:teuthology.orchestra.run.vm02.stdout:osd.7 vm05 running (6m) 3s ago 6m 50.7M 4096M 17.2.0 e1d6a67b021e 8a4837b788cf 2026-03-10T05:52:47.054 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.a vm05 *:9095 running (9s) 3s ago 5m 36.1M - 2.51.0 1d3b7f56885b 3328811f8f28 2026-03-10T05:52:47.054 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm02.pbogjd vm02 *:8000 running (5m) 2s ago 5m 85.9M - 17.2.0 e1d6a67b021e 2ab2ffd1abaa 2026-03-10T05:52:47.054 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm05.hvmsxl vm05 *:8000 running (5m) 3s ago 5m 85.3M - 17.2.0 e1d6a67b021e 85d1c77b7e9d 2026-03-10T05:52:47.055 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm02.pglcfm vm02 *:80 running (5m) 2s ago 5m 84.8M - 17.2.0 e1d6a67b021e ef152a460673 2026-03-10T05:52:47.055 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm05.hqqmap vm05 *:80 running (5m) 3s ago 5m 85.4M - 17.2.0 e1d6a67b021e 29c9ee794f34 2026-03-10T05:52:47.251 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:46 vm05 bash[41269]: ts=2026-03-10T05:52:46.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T05:52:47.258 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T05:52:47.258 INFO:teuthology.orchestra.run.vm02.stdout: "mon": { 2026-03-10T05:52:47.258 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3 2026-03-10T05:52:47.258 INFO:teuthology.orchestra.run.vm02.stdout: }, 
2026-03-10T05:52:47.258 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": { 2026-03-10T05:52:47.258 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T05:52:47.258 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T05:52:47.258 INFO:teuthology.orchestra.run.vm02.stdout: "osd": { 2026-03-10T05:52:47.258 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8 2026-03-10T05:52:47.258 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T05:52:47.258 INFO:teuthology.orchestra.run.vm02.stdout: "mds": {}, 2026-03-10T05:52:47.258 INFO:teuthology.orchestra.run.vm02.stdout: "rgw": { 2026-03-10T05:52:47.258 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4 2026-03-10T05:52:47.258 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T05:52:47.258 INFO:teuthology.orchestra.run.vm02.stdout: "overall": { 2026-03-10T05:52:47.258 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 15, 2026-03-10T05:52:47.258 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T05:52:47.258 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T05:52:47.258 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T05:52:47.508 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T05:52:47.509 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", 2026-03-10T05:52:47.509 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true, 2026-03-10T05:52:47.509 INFO:teuthology.orchestra.run.vm02.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-10T05:52:47.510 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [ 2026-03-10T05:52:47.510 INFO:teuthology.orchestra.run.vm02.stdout: "mgr" 2026-03-10T05:52:47.510 INFO:teuthology.orchestra.run.vm02.stdout: ], 2026-03-10T05:52:47.510 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "2/23 daemons upgraded", 2026-03-10T05:52:47.510 INFO:teuthology.orchestra.run.vm02.stdout: "message": "Currently upgrading mgr daemons", 2026-03-10T05:52:47.510 INFO:teuthology.orchestra.run.vm02.stdout: "is_paused": false 2026-03-10T05:52:47.510 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T05:52:47.745 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:47 vm02 bash[17462]: cluster 2026-03-10T05:52:46.205563+0000 mgr.y (mgr.24988) 39 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s 2026-03-10T05:52:47.745 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:47 vm02 bash[17462]: audit 2026-03-10T05:52:46.661235+0000 mgr.y (mgr.24988) 40 : audit [DBG] from='client.15123 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:52:47.745 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:47 vm02 bash[17462]: audit 2026-03-10T05:52:46.849798+0000 mgr.y (mgr.24988) 41 : audit [DBG] from='client.15126 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:52:47.745 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:47 vm02 bash[17462]: audit 2026-03-10T05:52:46.854141+0000 mgr.y 
(mgr.24988) 42 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:52:48.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:47 vm05 bash[17864]: cluster 2026-03-10T05:52:46.205563+0000 mgr.y (mgr.24988) 39 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s 2026-03-10T05:52:48.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:47 vm05 bash[17864]: audit 2026-03-10T05:52:46.661235+0000 mgr.y (mgr.24988) 40 : audit [DBG] from='client.15123 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:52:48.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:47 vm05 bash[17864]: audit 2026-03-10T05:52:46.849798+0000 mgr.y (mgr.24988) 41 : audit [DBG] from='client.15126 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:52:48.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:47 vm05 bash[17864]: audit 2026-03-10T05:52:46.854141+0000 mgr.y (mgr.24988) 42 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:52:48.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:47 vm02 bash[22526]: cluster 2026-03-10T05:52:46.205563+0000 mgr.y (mgr.24988) 39 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s 2026-03-10T05:52:48.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:47 vm02 bash[22526]: audit 2026-03-10T05:52:46.661235+0000 mgr.y (mgr.24988) 40 : audit [DBG] from='client.15123 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:52:48.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:47 vm02 bash[22526]: audit 2026-03-10T05:52:46.849798+0000 mgr.y (mgr.24988) 41 : audit [DBG] from='client.15126 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:52:48.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:47 vm02 bash[22526]: audit 2026-03-10T05:52:46.854141+0000 mgr.y (mgr.24988) 42 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:52:48.160 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_OK 2026-03-10T05:52:48.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:48 vm05 bash[17864]: audit 2026-03-10T05:52:47.033247+0000 mgr.y (mgr.24988) 43 : audit [DBG] from='client.15132 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:52:48.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:48 vm05 bash[17864]: audit 2026-03-10T05:52:47.257376+0000 mon.c (mon.1) 169 : audit [DBG] from='client.? 
192.168.123.102:0/1097513665' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:52:48.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:48 vm05 bash[17864]: audit 2026-03-10T05:52:47.507866+0000 mgr.y (mgr.24988) 44 : audit [DBG] from='client.25084 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:52:48.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:48 vm05 bash[17864]: audit 2026-03-10T05:52:48.160147+0000 mon.a (mon.0) 937 : audit [DBG] from='client.? 192.168.123.102:0/2722779720' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:52:48.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:48 vm02 bash[17462]: audit 2026-03-10T05:52:47.033247+0000 mgr.y (mgr.24988) 43 : audit [DBG] from='client.15132 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:52:48.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:48 vm02 bash[17462]: audit 2026-03-10T05:52:47.257376+0000 mon.c (mon.1) 169 : audit [DBG] from='client.? 192.168.123.102:0/1097513665' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:52:48.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:48 vm02 bash[17462]: audit 2026-03-10T05:52:47.507866+0000 mgr.y (mgr.24988) 44 : audit [DBG] from='client.25084 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:52:48.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:48 vm02 bash[17462]: audit 2026-03-10T05:52:48.160147+0000 mon.a (mon.0) 937 : audit [DBG] from='client.? 192.168.123.102:0/2722779720' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:52:48.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:48 vm02 bash[22526]: audit 2026-03-10T05:52:47.033247+0000 mgr.y (mgr.24988) 43 : audit [DBG] from='client.15132 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:52:48.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:48 vm02 bash[22526]: audit 2026-03-10T05:52:47.257376+0000 mon.c (mon.1) 169 : audit [DBG] from='client.? 192.168.123.102:0/1097513665' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:52:48.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:48 vm02 bash[22526]: audit 2026-03-10T05:52:47.507866+0000 mgr.y (mgr.24988) 44 : audit [DBG] from='client.25084 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:52:48.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:48 vm02 bash[22526]: audit 2026-03-10T05:52:48.160147+0000 mon.a (mon.0) 937 : audit [DBG] from='client.? 
192.168.123.102:0/2722779720' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:52:49.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:49 vm05 bash[17864]: cluster 2026-03-10T05:52:48.205908+0000 mgr.y (mgr.24988) 45 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s 2026-03-10T05:52:49.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:49 vm02 bash[17462]: cluster 2026-03-10T05:52:48.205908+0000 mgr.y (mgr.24988) 45 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s 2026-03-10T05:52:49.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:49 vm02 bash[22526]: cluster 2026-03-10T05:52:48.205908+0000 mgr.y (mgr.24988) 45 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s 2026-03-10T05:52:51.208 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:52:51 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:51.208 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:52:51 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:51.208 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:52:51 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:51.208 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:52:51 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:51.208 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:51 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
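Both Prometheus rule failures in this window (CephOSDFlapping at 05:52:44 and CephNodeDiskspaceWarning at 05:52:46) fail the same way: while the standby mgr takes over metrics export, the metadata series (ceph_osd_metadata, node_uname_info) exist twice, once with and once without the new cluster/instance labels, so the group_left join hits "many-to-many matching not allowed". A sketch of how one could confirm the duplication against the Prometheus HTTP API, assuming the prometheus.a endpoint on vm05:9095 from the orch ps listing above:

    # List the conflicting series for osd.0; two results differing only in
    # the "instance" and "cluster" labels reproduce the "found duplicate
    # series" error from the rule evaluations above.
    curl -sG 'http://vm05:9095/api/v1/series' \
        --data-urlencode 'match[]=ceph_osd_metadata{ceph_daemon="osd.0"}' \
        | jq '.data'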
2026-03-10T05:52:51.208 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:51 vm02 bash[17462]: audit 2026-03-10T05:52:50.154940+0000 mon.a (mon.0) 938 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:51.208 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:51 vm02 bash[17462]: audit 2026-03-10T05:52:50.160416+0000 mon.a (mon.0) 939 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:51.208 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:51 vm02 bash[17462]: audit 2026-03-10T05:52:50.162798+0000 mon.c (mon.1) 170 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:52:51.208 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:51 vm02 bash[17462]: audit 2026-03-10T05:52:50.163605+0000 mon.c (mon.1) 171 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:52:51.208 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:51 vm02 bash[17462]: audit 2026-03-10T05:52:50.168010+0000 mon.a (mon.0) 940 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:51.208 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:51 vm02 bash[17462]: cluster 2026-03-10T05:52:50.206323+0000 mgr.y (mgr.24988) 46 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:51 vm02 bash[17462]: audit 2026-03-10T05:52:50.206619+0000 mon.c (mon.1) 172 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:51 vm02 bash[17462]: audit 2026-03-10T05:52:50.208183+0000 mon.c (mon.1) 173 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:51 vm02 bash[17462]: cephadm 2026-03-10T05:52:50.208853+0000 mgr.y (mgr.24988) 47 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:51 vm02 bash[17462]: audit 2026-03-10T05:52:50.213968+0000 mon.a (mon.0) 941 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:51 vm02 bash[17462]: audit 2026-03-10T05:52:50.215586+0000 mon.c (mon.1) 174 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]: dispatch 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:51 vm02 bash[17462]: audit 2026-03-10T05:52:50.215813+0000 mon.a (mon.0) 942 : audit [INF] from='mgr.24988 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]: dispatch 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:51 vm02 bash[17462]: audit 2026-03-10T05:52:50.219852+0000 mon.a (mon.0) 943 : audit [INF] from='mgr.24988 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]': finished 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:51 vm02 bash[17462]: audit 2026-03-10T05:52:50.221210+0000 mon.c (mon.1) 175 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' 
cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]: dispatch 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:51 vm02 bash[17462]: audit 2026-03-10T05:52:50.221416+0000 mon.a (mon.0) 944 : audit [INF] from='mgr.24988 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]: dispatch 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:51 vm02 bash[17462]: audit 2026-03-10T05:52:50.225240+0000 mon.a (mon.0) 945 : audit [INF] from='mgr.24988 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]': finished 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:51 vm02 bash[17462]: audit 2026-03-10T05:52:50.226803+0000 mon.c (mon.1) 176 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:51 vm02 bash[17462]: audit 2026-03-10T05:52:50.227560+0000 mon.c (mon.1) 177 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["c"]}]: dispatch 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:51 vm02 bash[17462]: cephadm 2026-03-10T05:52:50.228131+0000 mgr.y (mgr.24988) 48 : cephadm [INF] Upgrade: It appears safe to stop mon.c 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:51 vm02 bash[17462]: cephadm 2026-03-10T05:52:50.643454+0000 mgr.y (mgr.24988) 49 : cephadm [INF] Upgrade: Updating mon.c 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:51 vm02 bash[17462]: audit 2026-03-10T05:52:50.650162+0000 mon.a (mon.0) 946 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:51 vm02 bash[17462]: audit 2026-03-10T05:52:50.651927+0000 mon.c (mon.1) 178 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:51 vm02 bash[17462]: audit 2026-03-10T05:52:50.652760+0000 mon.c (mon.1) 179 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:51 vm02 bash[17462]: audit 2026-03-10T05:52:50.653559+0000 mon.c (mon.1) 180 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:51 vm02 bash[17462]: cephadm 2026-03-10T05:52:50.654451+0000 mgr.y (mgr.24988) 50 : cephadm [INF] Deploying daemon mon.c on vm02 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:51 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: audit 2026-03-10T05:52:50.154940+0000 mon.a (mon.0) 938 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: audit 2026-03-10T05:52:50.160416+0000 mon.a (mon.0) 939 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: audit 2026-03-10T05:52:50.162798+0000 mon.c (mon.1) 170 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: audit 2026-03-10T05:52:50.163605+0000 mon.c (mon.1) 171 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: audit 2026-03-10T05:52:50.168010+0000 mon.a (mon.0) 940 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: cluster 2026-03-10T05:52:50.206323+0000 mgr.y (mgr.24988) 46 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: audit 2026-03-10T05:52:50.206619+0000 mon.c (mon.1) 172 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: audit 2026-03-10T05:52:50.208183+0000 mon.c (mon.1) 173 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: cephadm 2026-03-10T05:52:50.208853+0000 mgr.y (mgr.24988) 47 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: audit 2026-03-10T05:52:50.213968+0000 mon.a (mon.0) 941 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: audit 2026-03-10T05:52:50.215586+0000 mon.c (mon.1) 174 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]: dispatch 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: audit 2026-03-10T05:52:50.215813+0000 mon.a (mon.0) 942 : audit [INF] from='mgr.24988 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]: dispatch 2026-03-10T05:52:51.209 
INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: audit 2026-03-10T05:52:50.219852+0000 mon.a (mon.0) 943 : audit [INF] from='mgr.24988 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]': finished 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: audit 2026-03-10T05:52:50.221210+0000 mon.c (mon.1) 175 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]: dispatch 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: audit 2026-03-10T05:52:50.221416+0000 mon.a (mon.0) 944 : audit [INF] from='mgr.24988 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]: dispatch 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: audit 2026-03-10T05:52:50.225240+0000 mon.a (mon.0) 945 : audit [INF] from='mgr.24988 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]': finished 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: audit 2026-03-10T05:52:50.226803+0000 mon.c (mon.1) 176 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: audit 2026-03-10T05:52:50.227560+0000 mon.c (mon.1) 177 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["c"]}]: dispatch 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: cephadm 2026-03-10T05:52:50.228131+0000 mgr.y (mgr.24988) 48 : cephadm [INF] Upgrade: It appears safe to stop mon.c 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: cephadm 2026-03-10T05:52:50.643454+0000 mgr.y (mgr.24988) 49 : cephadm [INF] Upgrade: Updating mon.c 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: audit 2026-03-10T05:52:50.650162+0000 mon.a (mon.0) 946 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: audit 2026-03-10T05:52:50.651927+0000 mon.c (mon.1) 178 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: audit 2026-03-10T05:52:50.652760+0000 mon.c (mon.1) 179 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: audit 2026-03-10T05:52:50.653559+0000 mon.c (mon.1) 180 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:52:51.209 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: cephadm 2026-03-10T05:52:50.654451+0000 mgr.y (mgr.24988) 50 : cephadm [INF] Deploying daemon mon.c on vm02 2026-03-10T05:52:51.209 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:52:51 vm02 systemd[1]: 
2026-03-10T05:52:51.473 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:51 vm02 bash[52264]: [10/Mar/2026:05:52:51] ENGINE Bus STOPPING
2026-03-10T05:52:51.474 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:51 vm02 bash[52264]: [10/Mar/2026:05:52:51] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-10T05:52:51.474 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:51 vm02 bash[52264]: [10/Mar/2026:05:52:51] ENGINE Bus STOPPED
2026-03-10T05:52:51.474 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:51 vm02 bash[52264]: [10/Mar/2026:05:52:51] ENGINE Bus STARTING
2026-03-10T05:52:51.474 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 systemd[1]: Stopping Ceph mon.c for 107483ae-1c44-11f1-b530-c1172cd6122a...
2026-03-10T05:52:51.474 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: debug 2026-03-10T05:52:51.243+0000 7f9c4a694700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-10T05:52:51.474 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[22526]: debug 2026-03-10T05:52:51.243+0000 7f9c4a694700 -1 mon.c@1(peon) e3 *** Got Signal Terminated ***
2026-03-10T05:52:51.773 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55176]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-mon-c
2026-03-10T05:52:51.774 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55242]: Error response from daemon: No such container: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-mon-c
2026-03-10T05:52:51.774 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mon.c.service: Deactivated successfully.
2026-03-10T05:52:51.774 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 systemd[1]: Stopped Ceph mon.c for 107483ae-1c44-11f1-b530-c1172cd6122a.
2026-03-10T05:52:51.774 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:51 vm02 bash[52264]: [10/Mar/2026:05:52:51] ENGINE Serving on http://:::9283
2026-03-10T05:52:51.774 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:51 vm02 bash[52264]: [10/Mar/2026:05:52:51] ENGINE Bus STARTED
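The 'No such container' error above is benign: the unit's stop path tries to remove the previous mon.c container by name, and it is already gone, so systemd still records the unit as deactivated successfully. As an illustrative sketch of the tolerant-removal idiom (the container name is taken from the log; the trailing '|| true' is the assumption being illustrated, not a quote of cephadm's script):

    docker rm -f ceph-107483ae-1c44-11f1-b530-c1172cd6122a-mon-c 2>/dev/null || true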
2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 systemd[1]: Started Ceph mon.c for 107483ae-1c44-11f1-b530-c1172cd6122a.
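The KillMode=none warning that systemd raises for the cephadm unit template each time one of these daemons starts is a deprecation notice, not a failure; cephadm manages the container's lifecycle itself. For a unit one actually owns, the message already names the fix. A hypothetical drop-in showing the systemd mechanism (cephadm regenerates its units, so treat this as a sketch of the general technique, not a supported cephadm change):

    fsid=107483ae-1c44-11f1-b530-c1172cd6122a
    sudo mkdir -p /etc/systemd/system/ceph-${fsid}@.service.d
    # KillMode=mixed is one of the values the warning itself suggests.
    printf '[Service]\nKillMode=mixed\n' | sudo tee /etc/systemd/system/ceph-${fsid}@.service.d/override.conf
    sudo systemctl daemon-reload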
2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.955+0000 7f521b66ad80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.955+0000 7f521b66ad80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.955+0000 7f521b66ad80 0 pidfile_write: ignore empty --pid-file 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.955+0000 7f521b66ad80 0 load: jerasure load: lrc 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: RocksDB version: 7.9.2 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Git sha 0 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: DB SUMMARY 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: DB Session ID: X8RAATT4QS4OJCKO8A8H 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: CURRENT file: CURRENT 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: MANIFEST file: MANIFEST-000009 size: 503 Bytes 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-c/store.db dir, Total Num: 1, files: 000018.sst 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-c/store.db: 000016.log size: 5253873 ; 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.error_if_exists: 0 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.create_if_missing: 0 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.paranoid_checks: 1 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 
10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.env: 0x558fb57dfdc0 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.info_log: 0x558ff1d1b7e0 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.statistics: (nil) 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.use_fsync: 0 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.max_log_file_size: 0 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-10T05:52:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.allow_fallocate: 1 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.use_direct_reads: 0 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.create_missing_column_families: 0 
2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.db_log_dir: 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.wal_dir: 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.write_buffer_manager: 0x558ff1d1f900 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 
2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.unordered_write: 0 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.row_cache: None 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.wal_filter: None 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.two_write_queues: 0 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.wal_compression: 0 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.atomic_flush: 0 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.log_readahead_size: 0 2026-03-10T05:52:52.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T05:52:52.086 
INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.max_background_jobs: 2 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.max_background_compactions: -1 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.max_subcompactions: 1 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.max_open_files: -1 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 
bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Options.max_background_flushes: -1 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Compression algorithms supported: 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: kZSTD supported: 0 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: kXpressCompression supported: 0 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: kBZip2Compression supported: 0 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: kLZ4Compression supported: 1 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: kZlibCompression supported: 1 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: kSnappyCompression supported: 1 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.959+0000 7f521b66ad80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000009 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 
2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.merge_operator: 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.compaction_filter: None 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558ff1d1a3c0) 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: cache_index_and_filter_blocks: 1 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: pin_top_level_index_and_filter: 1 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: index_type: 0 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: data_block_index_type: 0 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: index_shortening: 1 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: data_block_hash_table_util_ratio: 0.750000 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: checksum: 4 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: no_block_cache: 0 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: block_cache: 0x558ff1d41350 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: block_cache_name: BinnedLRUCache 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: block_cache_options: 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: capacity : 536870912 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: num_shard_bits : 4 2026-03-10T05:52:52.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: strict_capacity_limit : 0 2026-03-10T05:52:52.088 
INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: high_pri_pool_ratio: 0.000 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: block_cache_compressed: (nil) 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: persistent_cache: (nil) 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: block_size: 4096 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: block_size_deviation: 10 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: block_restart_interval: 16 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: index_block_restart_interval: 1 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: metadata_block_size: 4096 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: partition_filters: 0 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: use_delta_encoding: 1 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: filter_policy: bloomfilter 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: whole_key_filtering: 1 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: verify_compression: 0 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: read_amp_bytes_per_bit: 0 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: format_version: 5 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: enable_index_compression: 1 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: block_align: 0 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: max_auto_readahead_size: 262144 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: prepopulate_block_cache: 0 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: initial_auto_readahead_size: 8192 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: num_file_reads_for_auto_readahead: 2 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.compression: NoCompression 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-10T05:52:52.088 
INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.num_levels: 7 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 
bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 
1 2026-03-10T05:52:52.088 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 
2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.inplace_update_support: 0 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.bloom_locality: 0 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.max_successive_merges: 0 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.ttl: 2592000 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 
2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.enable_blob_files: false 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.min_blob_size: 0 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 3 rocksdb: [table/block_based/block_based_table_reader.cc:721] At least one SST file opened without unique ID to verify: 18.sst 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
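The dump above is the new mon.c (19.2.3-678-ge911bdeb) replaying its RocksDB tuning at startup: a 512 MiB BinnedLRUCache block cache, level-style compaction, compression disabled. The store files it names can be checked directly on the host; a quick sketch using the mon_data path from the log:

    ls -lh /var/lib/ceph/mon/ceph-c/store.db   # CURRENT, MANIFEST-*, *.sst, *.log
    du -sh /var/lib/ceph/mon/ceph-c/store.db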
2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000009 succeeded,manifest_file_number is 9, next_file_number is 20, last_sequence is 10285, log_number is 16,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-10T05:52:52.089 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.963+0000 7f521b66ad80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 16 2026-03-10T05:52:52.090 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.967+0000 7f521b66ad80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 70b423b5-063a-4607-a4dc-c109ddc7c618 2026-03-10T05:52:52.090 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.967+0000 7f521b66ad80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773121971970379, "job": 1, "event": "recovery_started", "wal_files": [16]} 2026-03-10T05:52:52.090 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.967+0000 7f521b66ad80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #16 mode 2 2026-03-10T05:52:52.090 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.979+0000 7f521b66ad80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773121971984558, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 21, "file_size": 3201060, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10290, "largest_seqno": 11692, "table_properties": {"data_size": 3194351, "index_size": 4197, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 14083, "raw_average_key_size": 24, "raw_value_size": 3181253, "raw_average_value_size": 5447, "num_data_blocks": 194, "num_entries": 584, "num_filter_entries": 584, "num_deletions": 2, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773121971, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "70b423b5-063a-4607-a4dc-c109ddc7c618", "db_session_id": "X8RAATT4QS4OJCKO8A8H", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}} 2026-03-10T05:52:52.090 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.979+0000 7f521b66ad80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773121971984962, "job": 1, "event": "recovery_finished"} 2026-03-10T05:52:52.090 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.979+0000 7f521b66ad80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 23 2026-03-10T05:52:52.090 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 
05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.979+0000 7f521b66ad80 4 rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 2026-03-10T05:52:52.090 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.983+0000 7f521b66ad80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-c/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T05:52:52.090 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.983+0000 7f521b66ad80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558ff1d42e00 2026-03-10T05:52:52.090 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.983+0000 7f521b66ad80 4 rocksdb: DB pointer 0x558ff1e4e000 2026-03-10T05:52:52.090 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.983+0000 7f521b66ad80 0 starting mon.c rank 1 at public addrs [v2:192.168.123.102:3301/0,v1:192.168.123.102:6790/0] at bind addrs [v2:192.168.123.102:3301/0,v1:192.168.123.102:6790/0] mon_data /var/lib/ceph/mon/ceph-c fsid 107483ae-1c44-11f1-b530-c1172cd6122a 2026-03-10T05:52:52.090 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.983+0000 7f521b66ad80 1 mon.c@-1(???) e3 preinit fsid 107483ae-1c44-11f1-b530-c1172cd6122a 2026-03-10T05:52:52.090 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.987+0000 7f521b66ad80 0 mon.c@-1(???).mds e1 new map 2026-03-10T05:52:52.090 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.987+0000 7f521b66ad80 0 mon.c@-1(???).mds e1 print_map 2026-03-10T05:52:52.090 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: e1 2026-03-10T05:52:52.090 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: btime 1970-01-01T00:00:00:000000+0000 2026-03-10T05:52:52.090 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T05:52:52.090 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T05:52:52.090 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: legacy client fscid: -1 2026-03-10T05:52:52.090 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: 2026-03-10T05:52:52.090 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: No filesystems configured 2026-03-10T05:52:52.090 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.987+0000 7f521b66ad80 0 mon.c@-1(???).osd e90 crush map has features 3314933000854323200, adjusting msgr requires 2026-03-10T05:52:52.090 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.987+0000 7f521b66ad80 0 mon.c@-1(???).osd e90 crush map has features 432629239337189376, adjusting msgr requires 2026-03-10T05:52:52.090 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 
2026-03-10T05:52:51.987+0000 7f521b66ad80 0 mon.c@-1(???).osd e90 crush map has features 432629239337189376, adjusting msgr requires 2026-03-10T05:52:52.090 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.987+0000 7f521b66ad80 0 mon.c@-1(???).osd e90 crush map has features 432629239337189376, adjusting msgr requires 2026-03-10T05:52:52.090 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:51 vm02 bash[55303]: debug 2026-03-10T05:52:51.987+0000 7f521b66ad80 1 mon.c@-1(???).paxosservice(auth 1..21) refresh upgraded, format 0 -> 3 2026-03-10T05:52:53.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:53 vm02 bash[17462]: cluster 2026-03-10T05:52:52.016047+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:53 vm02 bash[17462]: cluster 2026-03-10T05:52:52.080533+0000 mon.a (mon.0) 947 : cluster [INF] mon.a calling monitor election 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:53 vm02 bash[17462]: cluster 2026-03-10T05:52:52.085319+0000 mon.a (mon.0) 948 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:53 vm02 bash[17462]: cluster 2026-03-10T05:52:52.091757+0000 mon.a (mon.0) 949 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0],b=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],c=[v2:192.168.123.102:3301/0,v1:192.168.123.102:6790/0]} 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:53 vm02 bash[17462]: cluster 2026-03-10T05:52:52.091959+0000 mon.a (mon.0) 950 : cluster [DBG] fsmap 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:53 vm02 bash[17462]: cluster 2026-03-10T05:52:52.092051+0000 mon.a (mon.0) 951 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:53 vm02 bash[17462]: cluster 2026-03-10T05:52:52.092757+0000 mon.a (mon.0) 952 : cluster [DBG] mgrmap e32: y(active, since 30s), standbys: x 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:53 vm02 bash[17462]: cluster 2026-03-10T05:52:52.099553+0000 mon.a (mon.0) 953 : cluster [INF] overall HEALTH_OK 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:53 vm02 bash[17462]: audit 2026-03-10T05:52:52.102848+0000 mon.a (mon.0) 954 : audit [INF] from='mgr.24988 ' entity='' 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:53 vm02 bash[17462]: audit 2026-03-10T05:52:52.108648+0000 mon.a (mon.0) 955 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:53 vm02 bash[17462]: cluster 2026-03-10T05:52:52.206606+0000 mgr.y (mgr.24988) 51 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:53 vm02 bash[17462]: audit 2026-03-10T05:52:52.424815+0000 mon.b (mon.2) 90 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:52 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:52:52] "GET /metrics HTTP/1.1" 200 37727 "" "Prometheus/2.51.0" 
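At this point the restarted mon.c rejoins its peers: mon.c calls an election, mon.a is re-elected leader, and the cluster log settles back to HEALTH_OK with mons a,c,b in quorum. The same quorum check the mgr drives via `quorum_status` in the audit entries can be polled by hand while an upgrade is rolling. A minimal sketch, assuming it runs on a host with an admin keyring (for example inside `cephadm shell`); the mon names and timeout are illustrative:

    import json
    import subprocess
    import time

    def wait_for_quorum(expected: set[str], timeout: float = 300.0) -> None:
        """Poll `ceph quorum_status` until the expected mons are in quorum."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            out = subprocess.run(
                ["ceph", "quorum_status", "--format", "json"],
                capture_output=True, text=True, check=True,
            ).stdout
            # quorum_status reports the member names under "quorum_names".
            if expected <= set(json.loads(out).get("quorum_names", [])):
                return
            time.sleep(5)
        raise TimeoutError(f"mons {expected} never formed quorum")

    # For the cluster above: wait_for_quorum({"a", "b", "c"})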
2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:53 vm02 bash[55303]: cluster 2026-03-10T05:52:52.016047+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:53 vm02 bash[55303]: cluster 2026-03-10T05:52:52.016047+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:53 vm02 bash[55303]: cluster 2026-03-10T05:52:52.080533+0000 mon.a (mon.0) 947 : cluster [INF] mon.a calling monitor election 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:53 vm02 bash[55303]: cluster 2026-03-10T05:52:52.080533+0000 mon.a (mon.0) 947 : cluster [INF] mon.a calling monitor election 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:53 vm02 bash[55303]: cluster 2026-03-10T05:52:52.085319+0000 mon.a (mon.0) 948 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:53 vm02 bash[55303]: cluster 2026-03-10T05:52:52.085319+0000 mon.a (mon.0) 948 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:53 vm02 bash[55303]: cluster 2026-03-10T05:52:52.091757+0000 mon.a (mon.0) 949 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0],b=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],c=[v2:192.168.123.102:3301/0,v1:192.168.123.102:6790/0]} 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:53 vm02 bash[55303]: cluster 2026-03-10T05:52:52.091757+0000 mon.a (mon.0) 949 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0],b=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],c=[v2:192.168.123.102:3301/0,v1:192.168.123.102:6790/0]} 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:53 vm02 bash[55303]: cluster 2026-03-10T05:52:52.091959+0000 mon.a (mon.0) 950 : cluster [DBG] fsmap 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:53 vm02 bash[55303]: cluster 2026-03-10T05:52:52.091959+0000 mon.a (mon.0) 950 : cluster [DBG] fsmap 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:53 vm02 bash[55303]: cluster 2026-03-10T05:52:52.092051+0000 mon.a (mon.0) 951 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:53 vm02 bash[55303]: cluster 2026-03-10T05:52:52.092051+0000 mon.a (mon.0) 951 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:53 vm02 bash[55303]: cluster 2026-03-10T05:52:52.092757+0000 mon.a (mon.0) 952 : cluster [DBG] mgrmap e32: y(active, since 30s), standbys: x 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:53 vm02 bash[55303]: cluster 2026-03-10T05:52:52.092757+0000 mon.a (mon.0) 952 : cluster [DBG] mgrmap e32: y(active, since 30s), standbys: x 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:53 vm02 bash[55303]: cluster 2026-03-10T05:52:52.099553+0000 mon.a (mon.0) 953 : cluster [INF] overall HEALTH_OK 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:53 vm02 bash[55303]: cluster 2026-03-10T05:52:52.099553+0000 mon.a (mon.0) 953 : cluster [INF] overall HEALTH_OK 
2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:53 vm02 bash[55303]: audit 2026-03-10T05:52:52.102848+0000 mon.a (mon.0) 954 : audit [INF] from='mgr.24988 ' entity='' 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:53 vm02 bash[55303]: audit 2026-03-10T05:52:52.102848+0000 mon.a (mon.0) 954 : audit [INF] from='mgr.24988 ' entity='' 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:53 vm02 bash[55303]: audit 2026-03-10T05:52:52.108648+0000 mon.a (mon.0) 955 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:53 vm02 bash[55303]: audit 2026-03-10T05:52:52.108648+0000 mon.a (mon.0) 955 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:53 vm02 bash[55303]: cluster 2026-03-10T05:52:52.206606+0000 mgr.y (mgr.24988) 51 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:53 vm02 bash[55303]: cluster 2026-03-10T05:52:52.206606+0000 mgr.y (mgr.24988) 51 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:53 vm02 bash[55303]: audit 2026-03-10T05:52:52.424815+0000 mon.b (mon.2) 90 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:52:53.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:53 vm02 bash[55303]: audit 2026-03-10T05:52:52.424815+0000 mon.b (mon.2) 90 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:52:53.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:53 vm05 bash[17864]: cluster 2026-03-10T05:52:52.016047+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-10T05:52:53.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:53 vm05 bash[17864]: cluster 2026-03-10T05:52:52.080533+0000 mon.a (mon.0) 947 : cluster [INF] mon.a calling monitor election 2026-03-10T05:52:53.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:53 vm05 bash[17864]: cluster 2026-03-10T05:52:52.085319+0000 mon.a (mon.0) 948 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T05:52:53.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:53 vm05 bash[17864]: cluster 2026-03-10T05:52:52.091757+0000 mon.a (mon.0) 949 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0],b=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],c=[v2:192.168.123.102:3301/0,v1:192.168.123.102:6790/0]} 2026-03-10T05:52:53.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:53 vm05 bash[17864]: cluster 2026-03-10T05:52:52.091959+0000 mon.a (mon.0) 950 : cluster [DBG] fsmap 2026-03-10T05:52:53.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:53 vm05 bash[17864]: cluster 2026-03-10T05:52:52.092051+0000 mon.a (mon.0) 951 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-10T05:52:53.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:53 vm05 bash[17864]: cluster 2026-03-10T05:52:52.092757+0000 mon.a (mon.0) 952 : cluster [DBG] mgrmap e32: y(active, since 30s), standbys: x 2026-03-10T05:52:53.501 
INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:53 vm05 bash[17864]: cluster 2026-03-10T05:52:52.099553+0000 mon.a (mon.0) 953 : cluster [INF] overall HEALTH_OK 2026-03-10T05:52:53.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:53 vm05 bash[17864]: audit 2026-03-10T05:52:52.102848+0000 mon.a (mon.0) 954 : audit [INF] from='mgr.24988 ' entity='' 2026-03-10T05:52:53.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:53 vm05 bash[17864]: audit 2026-03-10T05:52:52.108648+0000 mon.a (mon.0) 955 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:53.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:53 vm05 bash[17864]: cluster 2026-03-10T05:52:52.206606+0000 mgr.y (mgr.24988) 51 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:52:53.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:53 vm05 bash[17864]: audit 2026-03-10T05:52:52.424815+0000 mon.b (mon.2) 90 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:52:54.501 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:54 vm05 bash[41269]: ts=2026-03-10T05:52:54.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T05:52:55.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:55 vm05 bash[17864]: cluster 2026-03-10T05:52:54.207089+0000 mgr.y (mgr.24988) 52 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:52:55.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:55 vm02 bash[17462]: cluster 2026-03-10T05:52:54.207089+0000 mgr.y (mgr.24988) 52 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:52:55.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:55 vm02 bash[55303]: cluster 2026-03-10T05:52:54.207089+0000 mgr.y (mgr.24988) 52 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:52:55.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:55 vm02 bash[55303]: cluster 2026-03-10T05:52:54.207089+0000 mgr.y (mgr.24988) 52 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:52:57.234 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:52:56 vm05 bash[41269]: ts=2026-03-10T05:52:56.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", 
instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T05:52:57.334 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:52:56 vm02 bash[52264]: debug 2026-03-10T05:52:56.991+0000 7f94e91f9640 -1 mgr.server handle_report got status from non-daemon mon.c 2026-03-10T05:52:57.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:57 vm05 bash[17864]: cluster 2026-03-10T05:52:56.207423+0000 mgr.y (mgr.24988) 53 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:52:57.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:57 vm02 bash[17462]: cluster 2026-03-10T05:52:56.207423+0000 mgr.y (mgr.24988) 53 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:52:57.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:57 vm02 bash[55303]: cluster 2026-03-10T05:52:56.207423+0000 mgr.y (mgr.24988) 53 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:52:57.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:57 vm02 bash[55303]: cluster 2026-03-10T05:52:56.207423+0000 mgr.y (mgr.24988) 53 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:52:58.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:58 vm05 bash[17864]: audit 2026-03-10T05:52:56.860883+0000 mgr.y (mgr.24988) 54 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:52:58.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:58 vm05 bash[17864]: audit 2026-03-10T05:52:57.427922+0000 mon.a (mon.0) 956 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:58.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:58 vm05 bash[17864]: audit 2026-03-10T05:52:57.437228+0000 mon.a (mon.0) 957 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:58.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:58 vm05 bash[17864]: audit 2026-03-10T05:52:57.510931+0000 mon.a (mon.0) 958 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:58.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:58 vm05 bash[17864]: audit 2026-03-10T05:52:57.518147+0000 mon.a (mon.0) 959 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:58.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:58 vm05 bash[17864]: audit 2026-03-10T05:52:58.067204+0000 mon.a (mon.0) 960 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:58.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:58 vm05 bash[17864]: audit 2026-03-10T05:52:58.075926+0000 mon.a (mon.0) 961 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:58.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:58 vm02 bash[17462]: audit 2026-03-10T05:52:56.860883+0000 mgr.y (mgr.24988) 54 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:52:58.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:58 vm02 bash[17462]: audit 2026-03-10T05:52:57.427922+0000 mon.a (mon.0) 956 : audit [INF] 
from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:58.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:58 vm02 bash[17462]: audit 2026-03-10T05:52:57.437228+0000 mon.a (mon.0) 957 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:58.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:58 vm02 bash[17462]: audit 2026-03-10T05:52:57.510931+0000 mon.a (mon.0) 958 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:58.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:58 vm02 bash[17462]: audit 2026-03-10T05:52:57.518147+0000 mon.a (mon.0) 959 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:58.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:58 vm02 bash[17462]: audit 2026-03-10T05:52:58.067204+0000 mon.a (mon.0) 960 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:58.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:58 vm02 bash[17462]: audit 2026-03-10T05:52:58.075926+0000 mon.a (mon.0) 961 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:58.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:58 vm02 bash[55303]: audit 2026-03-10T05:52:56.860883+0000 mgr.y (mgr.24988) 54 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:52:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:58 vm02 bash[55303]: audit 2026-03-10T05:52:56.860883+0000 mgr.y (mgr.24988) 54 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:52:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:58 vm02 bash[55303]: audit 2026-03-10T05:52:57.427922+0000 mon.a (mon.0) 956 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:58 vm02 bash[55303]: audit 2026-03-10T05:52:57.427922+0000 mon.a (mon.0) 956 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:58 vm02 bash[55303]: audit 2026-03-10T05:52:57.437228+0000 mon.a (mon.0) 957 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:58 vm02 bash[55303]: audit 2026-03-10T05:52:57.437228+0000 mon.a (mon.0) 957 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:58 vm02 bash[55303]: audit 2026-03-10T05:52:57.510931+0000 mon.a (mon.0) 958 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:58 vm02 bash[55303]: audit 2026-03-10T05:52:57.510931+0000 mon.a (mon.0) 958 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:58 vm02 bash[55303]: audit 2026-03-10T05:52:57.518147+0000 mon.a (mon.0) 959 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:58 vm02 bash[55303]: audit 2026-03-10T05:52:57.518147+0000 mon.a (mon.0) 959 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:58 vm02 bash[55303]: audit 2026-03-10T05:52:58.067204+0000 mon.a (mon.0) 960 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:58 vm02 bash[55303]: audit 2026-03-10T05:52:58.067204+0000 
mon.a (mon.0) 960 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:58 vm02 bash[55303]: audit 2026-03-10T05:52:58.075926+0000 mon.a (mon.0) 961 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:58 vm02 bash[55303]: audit 2026-03-10T05:52:58.075926+0000 mon.a (mon.0) 961 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:52:59.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:52:59 vm05 bash[17864]: cluster 2026-03-10T05:52:58.207724+0000 mgr.y (mgr.24988) 55 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:52:59.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:52:59 vm02 bash[17462]: cluster 2026-03-10T05:52:58.207724+0000 mgr.y (mgr.24988) 55 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:52:59.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:59 vm02 bash[55303]: cluster 2026-03-10T05:52:58.207724+0000 mgr.y (mgr.24988) 55 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:52:59.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:52:59 vm02 bash[55303]: cluster 2026-03-10T05:52:58.207724+0000 mgr.y (mgr.24988) 55 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:53:01.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:01 vm05 bash[17864]: cluster 2026-03-10T05:53:00.208223+0000 mgr.y (mgr.24988) 56 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:53:01.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:01 vm02 bash[17462]: cluster 2026-03-10T05:53:00.208223+0000 mgr.y (mgr.24988) 56 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:53:01.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:01 vm02 bash[55303]: cluster 2026-03-10T05:53:00.208223+0000 mgr.y (mgr.24988) 56 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:53:01.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:01 vm02 bash[55303]: cluster 2026-03-10T05:53:00.208223+0000 mgr.y (mgr.24988) 56 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:53:03.334 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:02 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:53:02] "GET /metrics HTTP/1.1" 200 37727 "" "Prometheus/2.51.0" 2026-03-10T05:53:03.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:03 vm05 bash[17864]: cluster 2026-03-10T05:53:02.208554+0000 mgr.y (mgr.24988) 57 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:53:03.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:03 vm02 bash[17462]: cluster 2026-03-10T05:53:02.208554+0000 mgr.y (mgr.24988) 57 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:53:03.835 
INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:03 vm02 bash[55303]: cluster 2026-03-10T05:53:02.208554+0000 mgr.y (mgr.24988) 57 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:53:03.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:03 vm02 bash[55303]: cluster 2026-03-10T05:53:02.208554+0000 mgr.y (mgr.24988) 57 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:53:04.501 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:53:04 vm05 bash[41269]: ts=2026-03-10T05:53:04.148Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T05:53:05.635 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:53:05 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:05.635 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:05 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
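The recurring CephOSDFlapping and CephNodeDiskspaceWarning entries above are PromQL vector-matching failures, not alert firings: `* on (ceph_daemon) group_left (hostname) ceph_osd_metadata` requires the right-hand side to hold at most one series per `ceph_daemon`, but the scrape here yields two `ceph_osd_metadata` series for osd.0 (one with `instance="ceph_cluster"`, one with `instance="192.168.123.105:9283"`), so evaluation aborts with "many-to-many matching not allowed". A toy model of that uniqueness rule, illustrative only (real PromQL matching also aligns samples per timestamp):

    from collections import defaultdict

    def group_left_join(left, right, on):
        """Toy PromQL `on(<label>) group_left`: each left-hand sample may
        match at most one right-hand series per match group."""
        groups = defaultdict(list)
        for labels, value in right:
            groups[labels[on]].append((labels, value))
        out = []
        for labels, value in left:
            matches = groups.get(labels[on], [])
            if len(matches) > 1:
                raise ValueError(
                    f"found duplicate series for the match group "
                    f"{{{on}={labels[on]!r}}} on the right hand-side"
                )
            if matches:
                out.append((labels, value * matches[0][1]))
        return out

    # Two ceph_osd_metadata series for osd.0, as in the error above:
    rate_osd_up = [({"ceph_daemon": "osd.0"}, 0.0)]
    osd_metadata = [
        ({"ceph_daemon": "osd.0", "instance": "ceph_cluster"}, 1.0),
        ({"ceph_daemon": "osd.0", "instance": "192.168.123.105:9283"}, 1.0),
    ]
    group_left_join(rate_osd_up, osd_metadata, on="ceph_daemon")  # raises

Deduplicating the scrape targets (or dropping the divergent `instance` label) would make the right-hand side unique again.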
2026-03-10T05:53:05.635 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:53:05 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:05.635 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:53:05 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:05.635 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:53:05 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:05.635 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:05 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:05.636 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:05 vm02 bash[17462]: cluster 2026-03-10T05:53:04.209045+0000 mgr.y (mgr.24988) 58 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:53:05.636 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:05 vm02 bash[17462]: audit 2026-03-10T05:53:04.632119+0000 mon.a (mon.0) 962 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:53:05.636 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:05 vm02 bash[17462]: audit 2026-03-10T05:53:04.638627+0000 mon.a (mon.0) 963 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:53:05.636 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:05 vm02 bash[17462]: audit 2026-03-10T05:53:04.643204+0000 mon.b (mon.2) 91 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:05.636 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:05 vm02 bash[17462]: audit 2026-03-10T05:53:04.643971+0000 mon.b (mon.2) 92 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:53:05.636 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:05 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:05.636 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:05.636 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:53:05 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:05 vm02 bash[17462]: audit 2026-03-10T05:53:04.646150+0000 mon.a (mon.0) 964 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:05 vm02 bash[17462]: audit 2026-03-10T05:53:04.687772+0000 mon.b (mon.2) 93 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:05 vm02 bash[17462]: cephadm 2026-03-10T05:53:04.688957+0000 mgr.y (mgr.24988) 59 : cephadm [INF] Upgrade: It appears safe to stop mon.a 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:05 vm02 bash[17462]: audit 2026-03-10T05:53:04.689503+0000 mon.b (mon.2) 94 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:05 vm02 bash[17462]: audit 2026-03-10T05:53:04.690465+0000 mon.b (mon.2) 95 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:05 vm02 bash[17462]: audit 2026-03-10T05:53:04.691112+0000 mon.b (mon.2) 96 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["a"]}]: dispatch 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:05 vm02 bash[17462]: audit 2026-03-10T05:53:05.099392+0000 mon.a (mon.0) 965 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:05 vm02 bash[17462]: audit 2026-03-10T05:53:05.104174+0000 mon.b (mon.2) 97 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:05 vm02 bash[17462]: audit 2026-03-10T05:53:05.104826+0000 mon.b (mon.2) 98 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:05 vm02 bash[17462]: audit 2026-03-10T05:53:05.105461+0000 mon.b (mon.2) 99 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:05 vm02 systemd[1]: Stopping Ceph mon.a for 107483ae-1c44-11f1-b530-c1172cd6122a... 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:05 vm02 bash[17462]: debug 2026-03-10T05:53:05.687+0000 7f01c9320700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:05 vm02 bash[17462]: debug 2026-03-10T05:53:05.687+0000 7f01c9320700 -1 mon.a@0(leader) e3 *** Got Signal Terminated *** 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:05 vm02 bash[56259]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-mon-a 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:05 vm02 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mon.a.service: Deactivated successfully. 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:05 vm02 systemd[1]: Stopped Ceph mon.a for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: cluster 2026-03-10T05:53:04.209045+0000 mgr.y (mgr.24988) 58 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: cluster 2026-03-10T05:53:04.209045+0000 mgr.y (mgr.24988) 58 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: audit 2026-03-10T05:53:04.632119+0000 mon.a (mon.0) 962 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: audit 2026-03-10T05:53:04.632119+0000 mon.a (mon.0) 962 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: audit 2026-03-10T05:53:04.638627+0000 mon.a (mon.0) 963 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: audit 2026-03-10T05:53:04.638627+0000 mon.a (mon.0) 963 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: audit 2026-03-10T05:53:04.643204+0000 mon.b (mon.2) 91 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: audit 2026-03-10T05:53:04.643204+0000 mon.b (mon.2) 91 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: audit 2026-03-10T05:53:04.643971+0000 mon.b (mon.2) 92 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": 
"client.admin"}]: dispatch 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: audit 2026-03-10T05:53:04.643971+0000 mon.b (mon.2) 92 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: audit 2026-03-10T05:53:04.646150+0000 mon.a (mon.0) 964 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: audit 2026-03-10T05:53:04.646150+0000 mon.a (mon.0) 964 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: audit 2026-03-10T05:53:04.687772+0000 mon.b (mon.2) 93 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:53:05.934 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: audit 2026-03-10T05:53:04.687772+0000 mon.b (mon.2) 93 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:53:05.935 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: cephadm 2026-03-10T05:53:04.688957+0000 mgr.y (mgr.24988) 59 : cephadm [INF] Upgrade: It appears safe to stop mon.a 2026-03-10T05:53:05.935 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: cephadm 2026-03-10T05:53:04.688957+0000 mgr.y (mgr.24988) 59 : cephadm [INF] Upgrade: It appears safe to stop mon.a 2026-03-10T05:53:05.935 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: audit 2026-03-10T05:53:04.689503+0000 mon.b (mon.2) 94 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:53:05.935 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: audit 2026-03-10T05:53:04.689503+0000 mon.b (mon.2) 94 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:53:05.935 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: audit 2026-03-10T05:53:04.690465+0000 mon.b (mon.2) 95 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-10T05:53:05.935 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: audit 2026-03-10T05:53:04.690465+0000 mon.b (mon.2) 95 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-10T05:53:05.935 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: audit 2026-03-10T05:53:04.691112+0000 mon.b (mon.2) 96 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["a"]}]: dispatch 2026-03-10T05:53:05.935 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: audit 2026-03-10T05:53:04.691112+0000 mon.b (mon.2) 96 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["a"]}]: dispatch 2026-03-10T05:53:05.935 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: audit 2026-03-10T05:53:05.099392+0000 mon.a (mon.0) 965 : audit [INF] from='mgr.24988 ' 
entity='mgr.y' 2026-03-10T05:53:05.935 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: audit 2026-03-10T05:53:05.099392+0000 mon.a (mon.0) 965 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:53:05.935 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: audit 2026-03-10T05:53:05.104174+0000 mon.b (mon.2) 97 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:53:05.935 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: audit 2026-03-10T05:53:05.104174+0000 mon.b (mon.2) 97 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:53:05.935 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: audit 2026-03-10T05:53:05.104826+0000 mon.b (mon.2) 98 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:53:05.935 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: audit 2026-03-10T05:53:05.104826+0000 mon.b (mon.2) 98 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:53:05.935 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: audit 2026-03-10T05:53:05.105461+0000 mon.b (mon.2) 99 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:05.935 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 bash[55303]: audit 2026-03-10T05:53:05.105461+0000 mon.b (mon.2) 99 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:06.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:05 vm05 bash[17864]: cluster 2026-03-10T05:53:04.209045+0000 mgr.y (mgr.24988) 58 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:53:06.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:05 vm05 bash[17864]: audit 2026-03-10T05:53:04.632119+0000 mon.a (mon.0) 962 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:53:06.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:05 vm05 bash[17864]: audit 2026-03-10T05:53:04.638627+0000 mon.a (mon.0) 963 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:53:06.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:05 vm05 bash[17864]: audit 2026-03-10T05:53:04.643204+0000 mon.b (mon.2) 91 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:06.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:05 vm05 bash[17864]: audit 2026-03-10T05:53:04.643971+0000 mon.b (mon.2) 92 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:53:06.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:05 vm05 bash[17864]: audit 2026-03-10T05:53:04.646150+0000 mon.a (mon.0) 964 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:53:06.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:05 vm05 bash[17864]: audit 
2026-03-10T05:53:04.687772+0000 mon.b (mon.2) 93 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:53:06.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:05 vm05 bash[17864]: cephadm 2026-03-10T05:53:04.688957+0000 mgr.y (mgr.24988) 59 : cephadm [INF] Upgrade: It appears safe to stop mon.a 2026-03-10T05:53:06.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:05 vm05 bash[17864]: audit 2026-03-10T05:53:04.689503+0000 mon.b (mon.2) 94 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:53:06.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:05 vm05 bash[17864]: audit 2026-03-10T05:53:04.690465+0000 mon.b (mon.2) 95 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-10T05:53:06.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:05 vm05 bash[17864]: audit 2026-03-10T05:53:04.691112+0000 mon.b (mon.2) 96 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["a"]}]: dispatch 2026-03-10T05:53:06.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:05 vm05 bash[17864]: audit 2026-03-10T05:53:05.099392+0000 mon.a (mon.0) 965 : audit [INF] from='mgr.24988 ' entity='mgr.y' 2026-03-10T05:53:06.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:05 vm05 bash[17864]: audit 2026-03-10T05:53:05.104174+0000 mon.b (mon.2) 97 : audit [INF] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:53:06.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:05 vm05 bash[17864]: audit 2026-03-10T05:53:05.104826+0000 mon.b (mon.2) 98 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:53:06.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:05 vm05 bash[17864]: audit 2026-03-10T05:53:05.105461+0000 mon.b (mon.2) 99 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:06.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:05 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:06.335 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:53:05 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:06.335 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:53:05 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:06.335 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:53:05 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:06.335 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:53:05 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:06.335 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:05 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:06.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:53:05 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:06.335 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:05 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:06.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:05 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:06.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 systemd[1]: Started Ceph mon.a for 107483ae-1c44-11f1-b530-c1172cd6122a. 
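[annotation] The audit trail just above records the upgrade's pre-flight safety checks before mon.a is redeployed: "config dump", "versions", "quorum_status", and "mon ok-to-stop" for id "a", followed by "Upgrade: It appears safe to stop mon.a". A minimal sketch of the same checks, runnable by hand against this cluster (the daemon id "a" is taken from the audit records; the commands are standard ceph CLI):

    # Verify quorum membership, then ask the mons whether stopping mon.a is safe.
    ceph quorum_status -f json | jq -r '.quorum_names'
    ceph mon ok-to-stop a    # non-zero exit if stopping mon.a would lose quorum
    ceph versions            # shows which daemons still run the pre-upgrade build

This mirrors, not replaces, what cephadm does internally; it is useful when manually diagnosing why an upgrade pauses on a given monitor.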
2026-03-10T05:53:06.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-10T05:53:06.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-10T05:53:06.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 0 pidfile_write: ignore empty --pid-file 2026-03-10T05:53:06.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 0 load: jerasure load: lrc 2026-03-10T05:53:06.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: RocksDB version: 7.9.2 2026-03-10T05:53:06.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Git sha 0 2026-03-10T05:53:06.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T05:53:06.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: DB SUMMARY 2026-03-10T05:53:06.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: DB Session ID: WT7J3LM2X5PIAHY1Q1EL 2026-03-10T05:53:06.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: CURRENT file: CURRENT 2026-03-10T05:53:06.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-10T05:53:06.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: MANIFEST file: MANIFEST-000015 size: 579 Bytes 2026-03-10T05:53:06.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000024.sst 2026-03-10T05:53:06.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000022.log size: 4079559 ; 2026-03-10T05:53:06.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.error_if_exists: 0 2026-03-10T05:53:06.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.create_if_missing: 0 2026-03-10T05:53:06.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-10T05:53:06.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T05:53:06.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 
10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T05:53:06.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T05:53:06.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.env: 0x55ba30da3dc0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.info_log: 0x55ba429757e0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.statistics: (nil) 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.use_fsync: 0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.max_log_file_size: 0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.allow_fallocate: 1 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.use_direct_reads: 0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.create_missing_column_families: 0 
2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.db_log_dir: 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.wal_dir: 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.131+0000 7fc41fad2d80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.write_buffer_manager: 0x55ba42979900 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 
2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.unordered_write: 0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.row_cache: None 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.wal_filter: None 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.two_write_queues: 0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.wal_compression: 0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.atomic_flush: 0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T05:53:06.336 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T05:53:06.337 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.max_open_files: -1 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 
bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Compression algorithms supported: 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: kZSTD supported: 0 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: kXpressCompression supported: 0 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: kLZ4Compression supported: 1 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: kZlibCompression supported: 1 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: kSnappyCompression supported: 1 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000015 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 
2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.merge_operator: 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.compaction_filter: None 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ba429743c0) 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: cache_index_and_filter_blocks: 1 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: pin_top_level_index_and_filter: 1 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: index_type: 0 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: data_block_index_type: 0 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: index_shortening: 1 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: data_block_hash_table_util_ratio: 0.750000 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: checksum: 4 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: no_block_cache: 0 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: block_cache: 0x55ba4299b350 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: block_cache_name: BinnedLRUCache 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: block_cache_options: 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: capacity : 536870912 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: num_shard_bits : 4 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: strict_capacity_limit : 0 2026-03-10T05:53:06.337 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: high_pri_pool_ratio: 0.000 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: block_cache_compressed: (nil) 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: persistent_cache: (nil) 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: block_size: 4096 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: block_size_deviation: 10 2026-03-10T05:53:06.337 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: block_restart_interval: 16 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: index_block_restart_interval: 1 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: metadata_block_size: 4096 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: partition_filters: 0 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: use_delta_encoding: 1 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: filter_policy: bloomfilter 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: whole_key_filtering: 1 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: verify_compression: 0 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: read_amp_bytes_per_bit: 0 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: format_version: 5 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: enable_index_compression: 1 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: block_align: 0 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: max_auto_readahead_size: 262144 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: prepopulate_block_cache: 0 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: initial_auto_readahead_size: 8192 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: num_file_reads_for_auto_readahead: 2 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.compression: NoCompression 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-10T05:53:06.338 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.num_levels: 7 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 
bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 
1 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-10T05:53:06.338 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 
2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.bloom_locality: 0 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.ttl: 2592000 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 
2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.enable_blob_files: false 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.min_blob_size: 0 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.135+0000 7fc41fad2d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.139+0000 7fc4198a4640 3 rocksdb: [table/block_based/block_based_table_reader.cc:721] At least one SST file opened without unique ID to verify: 24.sst 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.143+0000 7fc41fad2d80 4 rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
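[annotation] The block above is the RocksDB options dump the newly upgraded mon.a (ceph 19.2.3, RocksDB 7.9.2) prints at startup; it records the effective store.db configuration after the container swap. A sketch for pulling just the option lines back out of the journal on the host (unit name assembled from the fsid visible in the log; adjust to the actual unit):

    # Extract the effective RocksDB options for mon.a, deduplicated.
    journalctl -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mon.a.service --no-pager \
      | grep -o 'rocksdb: *Options\.[a-z_0-9]*: .*' | sort -u

Comparing this output before and after an upgrade is a quick way to spot RocksDB tuning changes introduced by a new release.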
2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.143+0000 7fc41fad2d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000015 succeeded,manifest_file_number is 15, next_file_number is 26, last_sequence is 9665, log_number is 22,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.143+0000 7fc41fad2d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 22 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.143+0000 7fc41fad2d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c6329304-c2c1-42c6-a241-f0f851194597 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.143+0000 7fc41fad2d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773121986146423, "job": 1, "event": "recovery_started", "wal_files": [22]} 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.143+0000 7fc41fad2d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #22 mode 2 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.155+0000 7fc41fad2d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773121986160953, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 27, "file_size": 3677955, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9666, "largest_seqno": 11224, "table_properties": {"data_size": 3669975, "index_size": 5147, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 16578, "raw_average_key_size": 23, "raw_value_size": 3654295, "raw_average_value_size": 5250, "num_data_blocks": 239, "num_entries": 696, "num_filter_entries": 696, "num_deletions": 2, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773121986, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c6329304-c2c1-42c6-a241-f0f851194597", "db_session_id": "WT7J3LM2X5PIAHY1Q1EL", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}} 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.155+0000 7fc41fad2d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773121986161260, "job": 1, "event": "recovery_finished"} 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.155+0000 7fc41fad2d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 29 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 
05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.155+0000 7fc41fad2d80 4 rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.159+0000 7fc41fad2d80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.159+0000 7fc41fad2d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55ba4299ce00 2026-03-10T05:53:06.339 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:06 vm02 bash[56371]: debug 2026-03-10T05:53:06.159+0000 7fc41fad2d80 4 rocksdb: DB pointer 0x55ba42aa8000 2026-03-10T05:53:07.251 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:53:06 vm05 bash[41269]: ts=2026-03-10T05:53:06.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T05:53:08.001 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:07 vm05 bash[41654]: ignoring --setuser ceph since I am not root 2026-03-10T05:53:08.001 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:07 vm05 bash[41654]: ignoring --setgroup ceph since I am not root 2026-03-10T05:53:08.001 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:07 vm05 bash[41654]: debug 2026-03-10T05:53:07.721+0000 7f82de757140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T05:53:08.001 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:07 vm05 bash[41654]: debug 2026-03-10T05:53:07.753+0000 7f82de757140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T05:53:08.001 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:07 vm05 bash[41654]: debug 2026-03-10T05:53:07.865+0000 7f82de757140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T05:53:08.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:07 vm05 bash[17864]: cephadm 2026-03-10T05:53:05.093964+0000 mgr.y (mgr.24988) 60 : cephadm [INF] Upgrade: Updating mon.a 
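[annotation] The CephNodeDiskspaceWarning failure above is a PromQL many-to-many match: node_uname_info for instance="vm05" exists twice, once with a "cluster" label and once without (both series are printed in the error), so the rule's "on (instance) group_left (nodename)" join finds two right-hand series. A sketch to confirm the duplicate from the Prometheus HTTP API (host and port are assumptions; cephadm typically exposes Prometheus on 9095, adjust to the deployment):

    # A count > 1 for instance="vm05" reproduces the failed group_left join.
    curl -sG http://vm05:9095/api/v1/query \
      --data-urlencode 'query=count by (instance) (node_uname_info)'

The usual remedies are dropping the stale scrape target or relabeling so that only one node_uname_info series per instance remains.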
2026-03-10T05:53:08.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:07 vm05 bash[17864]: cephadm 2026-03-10T05:53:05.103506+0000 mgr.y (mgr.24988) 61 : cephadm [INF] Deploying daemon mon.a on vm02
2026-03-10T05:53:08.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:07 vm05 bash[17864]: cluster 2026-03-10T05:53:06.209363+0000 mgr.y (mgr.24988) 62 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:53:08.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:07 vm05 bash[17864]: cluster 2026-03-10T05:53:06.375265+0000 mon.a (mon.0) 1 : cluster [INF] mon.a calling monitor election
2026-03-10T05:53:08.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:07 vm05 bash[17864]: audit 2026-03-10T05:53:07.425170+0000 mon.b (mon.2) 100 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:53:08.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:07 vm05 bash[17864]: cluster 2026-03-10T05:53:07.585040+0000 mon.a (mon.0) 2 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2)
2026-03-10T05:53:08.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:07 vm05 bash[17864]: cluster 2026-03-10T05:53:07.588860+0000 mon.a (mon.0) 3 : cluster [DBG] monmap epoch 3
2026-03-10T05:53:08.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:07 vm05 bash[17864]: cluster 2026-03-10T05:53:07.588869+0000 mon.a (mon.0) 4 : cluster [DBG] fsid 107483ae-1c44-11f1-b530-c1172cd6122a
2026-03-10T05:53:08.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:07 vm05 bash[17864]: cluster 2026-03-10T05:53:07.588872+0000 mon.a (mon.0) 5 : cluster [DBG] last_changed 2026-03-10T05:44:30.716574+0000
2026-03-10T05:53:08.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:07 vm05 bash[17864]: cluster 2026-03-10T05:53:07.588875+0000 mon.a (mon.0) 6 : cluster [DBG] created 2026-03-10T05:43:50.866640+0000
2026-03-10T05:53:08.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:07 vm05 bash[17864]: cluster 2026-03-10T05:53:07.588879+0000 mon.a (mon.0) 7 : cluster [DBG] min_mon_release 17 (quincy)
2026-03-10T05:53:08.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:07 vm05 bash[17864]: cluster 2026-03-10T05:53:07.588882+0000 mon.a (mon.0) 8 : cluster [DBG] election_strategy: 1
2026-03-10T05:53:08.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:07 vm05 bash[17864]: cluster 2026-03-10T05:53:07.588885+0000 mon.a (mon.0) 9 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a
2026-03-10T05:53:08.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:07 vm05 bash[17864]: cluster 2026-03-10T05:53:07.588888+0000 mon.a (mon.0) 10 : cluster [DBG] 1: [v2:192.168.123.102:3301/0,v1:192.168.123.102:6790/0] mon.c
2026-03-10T05:53:08.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:07 vm05 bash[17864]: cluster 2026-03-10T05:53:07.588892+0000 mon.a (mon.0) 11 : cluster [DBG] 2: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.b
2026-03-10T05:53:08.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:07 vm05 bash[17864]: cluster 2026-03-10T05:53:07.589211+0000 mon.a (mon.0) 12 : cluster [DBG] fsmap
2026-03-10T05:53:08.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:07 vm05 bash[17864]: cluster 2026-03-10T05:53:07.589442+0000 mon.a (mon.0) 13 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in
2026-03-10T05:53:08.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:07 vm05 bash[17864]: cluster 2026-03-10T05:53:07.590438+0000 mon.a (mon.0) 14 : cluster [DBG] mgrmap e32: y(active, since 46s), standbys: x
2026-03-10T05:53:08.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:07 vm05 bash[17864]: cluster 2026-03-10T05:53:07.591217+0000 mon.a (mon.0) 15 : cluster [INF] overall HEALTH_OK
2026-03-10T05:53:08.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:07 vm05 bash[17864]: audit 2026-03-10T05:53:07.610999+0000 mon.a (mon.0) 16 : audit [INF] from='mgr.24988 ' entity=''
2026-03-10T05:53:08.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:07 vm05 bash[17864]: cluster 2026-03-10T05:53:07.617281+0000 mon.a (mon.0) 17 : cluster [DBG] mgrmap e33: y(active, since 46s), standbys: x
2026-03-10T05:53:08.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:07 vm02 bash[56371]: cephadm 2026-03-10T05:53:05.093964+0000 mgr.y (mgr.24988) 60 : cephadm [INF] Upgrade: Updating mon.a
2026-03-10T05:53:08.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:07 vm02 bash[56371]: cephadm 2026-03-10T05:53:05.103506+0000 mgr.y (mgr.24988) 61 : cephadm [INF] Deploying daemon mon.a on vm02
2026-03-10T05:53:08.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:07 vm02 bash[56371]: cluster 2026-03-10T05:53:06.209363+0000 mgr.y (mgr.24988) 62 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:53:08.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:07 vm02 bash[56371]: cluster 2026-03-10T05:53:06.375265+0000 mon.a (mon.0) 1 : cluster [INF] mon.a calling monitor election
2026-03-10T05:53:08.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:07 vm02 bash[56371]: audit 2026-03-10T05:53:07.425170+0000 mon.b (mon.2) 100 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:53:08.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:07 vm02 bash[56371]: cluster 2026-03-10T05:53:07.585040+0000 mon.a (mon.0) 2 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2)
2026-03-10T05:53:08.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:07 vm02 bash[56371]: cluster 2026-03-10T05:53:07.588860+0000 mon.a (mon.0) 3 : cluster [DBG] monmap epoch 3
2026-03-10T05:53:08.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:07 vm02 bash[56371]: cluster 2026-03-10T05:53:07.588869+0000 mon.a (mon.0) 4 : cluster [DBG] fsid 107483ae-1c44-11f1-b530-c1172cd6122a
2026-03-10T05:53:08.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:07 vm02 bash[56371]: cluster 2026-03-10T05:53:07.588872+0000 mon.a (mon.0) 5 : cluster [DBG] last_changed 2026-03-10T05:44:30.716574+0000
2026-03-10T05:53:08.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:07 vm02 bash[56371]: cluster 2026-03-10T05:53:07.588875+0000 mon.a (mon.0) 6 : cluster [DBG] created 2026-03-10T05:43:50.866640+0000
2026-03-10T05:53:08.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:07 vm02 bash[56371]: cluster 2026-03-10T05:53:07.588879+0000 mon.a (mon.0) 7 : cluster [DBG] min_mon_release 17 (quincy)
2026-03-10T05:53:08.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:07 vm02 bash[56371]: cluster 2026-03-10T05:53:07.588882+0000 mon.a (mon.0) 8 : cluster [DBG] election_strategy: 1
2026-03-10T05:53:08.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:07 vm02 bash[56371]: cluster 2026-03-10T05:53:07.588885+0000 mon.a (mon.0) 9 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a
2026-03-10T05:53:08.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:07 vm02 bash[56371]: cluster 2026-03-10T05:53:07.588888+0000 mon.a (mon.0) 10 : cluster [DBG] 1: [v2:192.168.123.102:3301/0,v1:192.168.123.102:6790/0] mon.c
2026-03-10T05:53:08.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:07 vm02 bash[56371]: cluster 2026-03-10T05:53:07.588892+0000 mon.a (mon.0) 11 : cluster [DBG] 2: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.b
2026-03-10T05:53:08.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:07 vm02 bash[56371]: cluster 2026-03-10T05:53:07.589211+0000 mon.a (mon.0) 12 : cluster [DBG] fsmap
2026-03-10T05:53:08.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:07 vm02 bash[56371]: cluster 2026-03-10T05:53:07.589442+0000 mon.a (mon.0) 13 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in
2026-03-10T05:53:08.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:07 vm02 bash[56371]: cluster 2026-03-10T05:53:07.590438+0000 mon.a (mon.0) 14 : cluster [DBG] mgrmap e32: y(active, since 46s), standbys: x
2026-03-10T05:53:08.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:07 vm02 bash[56371]: cluster 2026-03-10T05:53:07.591217+0000 mon.a (mon.0) 15 : cluster [INF] overall HEALTH_OK
2026-03-10T05:53:08.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:07 vm02 bash[56371]: audit 2026-03-10T05:53:07.610999+0000 mon.a (mon.0) 16 : audit [INF] from='mgr.24988 ' entity=''
2026-03-10T05:53:08.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:07 vm02 bash[56371]: cluster 2026-03-10T05:53:07.617281+0000 mon.a (mon.0) 17 : cluster [DBG] mgrmap e33: y(active, since 46s), standbys: x
2026-03-10T05:53:08.086 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:07 vm02 bash[52264]: ignoring --setuser ceph since I am not root
2026-03-10T05:53:08.086 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:07 vm02 bash[52264]: ignoring --setgroup ceph since I am not root
2026-03-10T05:53:08.086 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:07 vm02 bash[52264]: debug 2026-03-10T05:53:07.675+0000 7fd67cb82640 1 -- 192.168.123.102:0/1372716566 <== mon.1 v2:192.168.123.102:3301/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x55679757f4a0 con 0x556797581400
2026-03-10T05:53:08.086 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:07 vm02 bash[52264]: debug 2026-03-10T05:53:07.735+0000 7fd67f3df140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T05:53:08.086 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:07 vm02 bash[52264]: debug 2026-03-10T05:53:07.767+0000 7fd67f3df140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T05:53:08.086 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:07 vm02 bash[52264]: debug 2026-03-10T05:53:07.883+0000 7fd67f3df140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T05:53:08.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:07 vm02 bash[55303]: cephadm 2026-03-10T05:53:05.093964+0000 mgr.y (mgr.24988) 60 : cephadm [INF] Upgrade: Updating mon.a
2026-03-10T05:53:08.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:07 vm02 bash[55303]: cephadm 2026-03-10T05:53:05.103506+0000 mgr.y (mgr.24988) 61 : cephadm [INF] Deploying daemon mon.a on vm02
2026-03-10T05:53:08.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:07 vm02 bash[55303]: cluster 2026-03-10T05:53:06.209363+0000 mgr.y (mgr.24988) 62 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 102 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:53:08.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:07 vm02 bash[55303]: cluster 2026-03-10T05:53:06.375265+0000 mon.a (mon.0) 1 : cluster [INF] mon.a calling monitor election
2026-03-10T05:53:08.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:07 vm02 bash[55303]: audit 2026-03-10T05:53:07.425170+0000 mon.b (mon.2) 100 : audit [DBG] from='mgr.24988 192.168.123.102:0/1338236995' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:53:08.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:07 vm02 bash[55303]: cluster 2026-03-10T05:53:07.585040+0000 mon.a (mon.0) 2 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2)
2026-03-10T05:53:08.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:07 vm02 bash[55303]: cluster 2026-03-10T05:53:07.588860+0000 mon.a (mon.0) 3 : cluster [DBG] monmap epoch 3
2026-03-10T05:53:08.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:07 vm02 bash[55303]: cluster 2026-03-10T05:53:07.588869+0000 mon.a (mon.0) 4 : cluster [DBG] fsid 107483ae-1c44-11f1-b530-c1172cd6122a
2026-03-10T05:53:08.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:07 vm02 bash[55303]: cluster 2026-03-10T05:53:07.588872+0000 mon.a (mon.0) 5 : cluster [DBG] last_changed 2026-03-10T05:44:30.716574+0000
2026-03-10T05:53:08.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:07 vm02 bash[55303]: cluster 2026-03-10T05:53:07.588875+0000 mon.a (mon.0) 6 : cluster [DBG] created 2026-03-10T05:43:50.866640+0000
2026-03-10T05:53:08.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:07 vm02 bash[55303]: cluster 2026-03-10T05:53:07.588879+0000 mon.a (mon.0) 7 : cluster [DBG] min_mon_release 17 (quincy)
2026-03-10T05:53:08.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:07 vm02 bash[55303]: cluster 2026-03-10T05:53:07.588882+0000 mon.a (mon.0) 8 : cluster [DBG] election_strategy: 1
2026-03-10T05:53:08.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:07 vm02 bash[55303]: cluster 2026-03-10T05:53:07.588885+0000 mon.a (mon.0) 9 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a
2026-03-10T05:53:08.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:07 vm02 bash[55303]: cluster 2026-03-10T05:53:07.588888+0000 mon.a (mon.0) 10 : cluster [DBG] 1: [v2:192.168.123.102:3301/0,v1:192.168.123.102:6790/0] mon.c
2026-03-10T05:53:08.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:07 vm02 bash[55303]: cluster 2026-03-10T05:53:07.588892+0000 mon.a (mon.0) 11 : cluster [DBG] 2: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.b
2026-03-10T05:53:08.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:07 vm02 bash[55303]: cluster 2026-03-10T05:53:07.589211+0000 mon.a (mon.0) 12 : cluster [DBG] fsmap
2026-03-10T05:53:08.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:07 vm02 bash[55303]: cluster 2026-03-10T05:53:07.589442+0000 mon.a (mon.0) 13 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in
2026-03-10T05:53:08.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:07 vm02 bash[55303]: cluster 2026-03-10T05:53:07.590438+0000 mon.a (mon.0) 14 : cluster [DBG] mgrmap e32: y(active, since 46s), standbys: x
2026-03-10T05:53:08.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:07 vm02 bash[55303]: cluster 2026-03-10T05:53:07.591217+0000 mon.a (mon.0) 15 : cluster [INF] overall HEALTH_OK
2026-03-10T05:53:08.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:07 vm02 bash[55303]: audit 2026-03-10T05:53:07.610999+0000 mon.a (mon.0) 16 : audit [INF] from='mgr.24988 ' entity=''
2026-03-10T05:53:08.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:07 vm02 bash[55303]: cluster 2026-03-10T05:53:07.617281+0000 mon.a (mon.0) 17 : cluster [DBG] mgrmap e33: y(active, since 46s), standbys: x
2026-03-10T05:53:08.501 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:08 vm05 bash[41654]: debug 2026-03-10T05:53:08.133+0000 7f82de757140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T05:53:08.580 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:08 vm02 bash[52264]: debug 2026-03-10T05:53:08.147+0000 7fd67f3df140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T05:53:08.834 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:08 vm02 bash[52264]: debug 2026-03-10T05:53:08.575+0000 7fd67f3df140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T05:53:08.835 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:08 vm02 bash[52264]: debug 2026-03-10T05:53:08.667+0000 7fd67f3df140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T05:53:08.835 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:08 vm02 bash[52264]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T05:53:08.835 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:08 vm02 bash[52264]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-10T05:53:08.835 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:08 vm02 bash[52264]: from numpy import show_config as show_numpy_config
2026-03-10T05:53:08.835 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:08 vm02 bash[52264]: debug 2026-03-10T05:53:08.787+0000 7fd67f3df140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T05:53:08.913 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:08 vm05 bash[41654]: debug 2026-03-10T05:53:08.557+0000 7f82de757140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T05:53:08.913 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:08 vm05 bash[41654]: debug 2026-03-10T05:53:08.649+0000 7f82de757140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T05:53:08.913 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:08 vm05 bash[41654]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T05:53:08.913 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:08 vm05 bash[41654]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-10T05:53:08.913 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:08 vm05 bash[41654]: from numpy import show_config as show_numpy_config
2026-03-10T05:53:08.913 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:08 vm05 bash[41654]: debug 2026-03-10T05:53:08.769+0000 7f82de757140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T05:53:09.251 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:08 vm05 bash[41654]: debug 2026-03-10T05:53:08.909+0000 7f82de757140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T05:53:09.251 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:08 vm05 bash[41654]: debug 2026-03-10T05:53:08.949+0000 7f82de757140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T05:53:09.251 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:08 vm05 bash[41654]: debug 2026-03-10T05:53:08.985+0000 7f82de757140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T05:53:09.251 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:09 vm05 bash[41654]: debug 2026-03-10T05:53:09.025+0000 7f82de757140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T05:53:09.251 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:09 vm05 bash[41654]: debug 2026-03-10T05:53:09.073+0000 7f82de757140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T05:53:09.335 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:08 vm02 bash[52264]: debug 2026-03-10T05:53:08.935+0000 7fd67f3df140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T05:53:09.335 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:08 vm02 bash[52264]: debug 2026-03-10T05:53:08.979+0000 7fd67f3df140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T05:53:09.335 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:09 vm02 bash[52264]: debug 2026-03-10T05:53:09.019+0000 7fd67f3df140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T05:53:09.335 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:09 vm02 bash[52264]: debug 2026-03-10T05:53:09.067+0000 7fd67f3df140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T05:53:09.335 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:09 vm02 bash[52264]: debug 2026-03-10T05:53:09.123+0000 7fd67f3df140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T05:53:09.769 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:09 vm05 bash[41654]: debug 2026-03-10T05:53:09.513+0000 7f82de757140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T05:53:09.769 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:09 vm05 bash[41654]: debug 2026-03-10T05:53:09.549+0000 7f82de757140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T05:53:09.769 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:09 vm05 bash[41654]: debug 2026-03-10T05:53:09.585+0000 7f82de757140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T05:53:09.769 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:09 vm05 bash[41654]: debug 2026-03-10T05:53:09.725+0000 7f82de757140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T05:53:09.769 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:09 vm05 bash[17864]: cluster 2026-03-10T05:53:08.609805+0000 mon.a (mon.0) 18 : cluster [DBG] mgrmap e34: y(active, since 47s), standbys: x
2026-03-10T05:53:09.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:09 vm02 bash[56371]: cluster 2026-03-10T05:53:08.609805+0000 mon.a (mon.0) 18 : cluster [DBG] mgrmap e34: y(active, since 47s), standbys: x
2026-03-10T05:53:09.834 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:09 vm02 bash[52264]: debug 2026-03-10T05:53:09.575+0000 7fd67f3df140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T05:53:09.834 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:09 vm02 bash[52264]: debug 2026-03-10T05:53:09.611+0000 7fd67f3df140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T05:53:09.834 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:09 vm02 bash[52264]: debug 2026-03-10T05:53:09.651+0000 7fd67f3df140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T05:53:09.835 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:09 vm02 bash[52264]: debug 2026-03-10T05:53:09.795+0000 7fd67f3df140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T05:53:09.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:09 vm02 bash[55303]: cluster 2026-03-10T05:53:08.609805+0000 mon.a (mon.0) 18 : cluster [DBG] mgrmap e34: y(active, since 47s), standbys: x
2026-03-10T05:53:10.070 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:09 vm05 bash[41654]: debug 2026-03-10T05:53:09.765+0000 7f82de757140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T05:53:10.070 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:09 vm05 bash[41654]: debug 2026-03-10T05:53:09.805+0000 7f82de757140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T05:53:10.070 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:09 vm05 bash[41654]: debug 2026-03-10T05:53:09.913+0000 7f82de757140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T05:53:10.146 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:09 vm02 bash[52264]: debug 2026-03-10T05:53:09.835+0000 7fd67f3df140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T05:53:10.146 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:09 vm02 bash[52264]: debug 2026-03-10T05:53:09.875+0000 7fd67f3df140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T05:53:10.146 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:09 vm02 bash[52264]: debug 2026-03-10T05:53:09.983+0000 7fd67f3df140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T05:53:10.322 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:10 vm05 bash[41654]: debug 2026-03-10T05:53:10.065+0000 7f82de757140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T05:53:10.322 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:10 vm05 bash[41654]: debug 2026-03-10T05:53:10.241+0000 7f82de757140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T05:53:10.322 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:10 vm05 bash[41654]: debug 2026-03-10T05:53:10.277+0000 7f82de757140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T05:53:10.539 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:10 vm02 bash[52264]: debug 2026-03-10T05:53:10.143+0000 7fd67f3df140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T05:53:10.539 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:10 vm02 bash[52264]: debug 2026-03-10T05:53:10.315+0000 7fd67f3df140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T05:53:10.539 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:10 vm02 bash[52264]: debug 2026-03-10T05:53:10.347+0000 7fd67f3df140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T05:53:10.539 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:10 vm02 bash[52264]: debug 2026-03-10T05:53:10.391+0000 7fd67f3df140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T05:53:10.695 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:10 vm05 bash[41654]: debug 2026-03-10T05:53:10.317+0000 7f82de757140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T05:53:10.695 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:10 vm05 bash[41654]: debug 2026-03-10T05:53:10.461+0000 7f82de757140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T05:53:10.799 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:10 vm02 bash[56371]: cluster 2026-03-10T05:53:10.696472+0000 mon.a (mon.0) 19 : cluster [DBG] Standby manager daemon x restarted
2026-03-10T05:53:10.799 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:10 vm02 bash[56371]: cluster 2026-03-10T05:53:10.696595+0000 mon.a (mon.0) 20 : cluster [DBG] Standby manager daemon x started
2026-03-10T05:53:10.799 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:10 vm02 bash[56371]: audit 2026-03-10T05:53:10.700965+0000 mon.b (mon.2) 101 : audit [DBG] from='mgr.? 192.168.123.105:0/532490300' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T05:53:10.799 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:10 vm02 bash[56371]: audit 2026-03-10T05:53:10.701494+0000 mon.b (mon.2) 102 : audit [DBG] from='mgr.? 192.168.123.105:0/532490300' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T05:53:10.799 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:10 vm02 bash[56371]: audit 2026-03-10T05:53:10.702453+0000 mon.b (mon.2) 103 : audit [DBG] from='mgr.? 192.168.123.105:0/532490300' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T05:53:10.799 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:10 vm02 bash[56371]: audit 2026-03-10T05:53:10.702744+0000 mon.b (mon.2) 104 : audit [DBG] from='mgr.? 192.168.123.105:0/532490300' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T05:53:10.799 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:10 vm02 bash[52264]: debug 2026-03-10T05:53:10.535+0000 7fd67f3df140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T05:53:10.799 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:10 vm02 bash[55303]: cluster 2026-03-10T05:53:10.696472+0000 mon.a (mon.0) 19 : cluster [DBG] Standby manager daemon x restarted
2026-03-10T05:53:10.799 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:10 vm02 bash[55303]: cluster 2026-03-10T05:53:10.696595+0000 mon.a (mon.0) 20 : cluster [DBG] Standby manager daemon x started
2026-03-10T05:53:10.799 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:10 vm02 bash[55303]: audit 2026-03-10T05:53:10.700965+0000 mon.b (mon.2) 101 : audit [DBG] from='mgr.? 192.168.123.105:0/532490300' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T05:53:10.799 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:10 vm02 bash[55303]: audit 2026-03-10T05:53:10.701494+0000 mon.b (mon.2) 102 : audit [DBG] from='mgr.? 192.168.123.105:0/532490300' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T05:53:10.799 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:10 vm02 bash[55303]: audit 2026-03-10T05:53:10.702453+0000 mon.b (mon.2) 103 : audit [DBG] from='mgr.? 192.168.123.105:0/532490300' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T05:53:10.799 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:10 vm02 bash[55303]: audit 2026-03-10T05:53:10.702744+0000 mon.b (mon.2) 104 : audit [DBG] from='mgr.? 192.168.123.105:0/532490300' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T05:53:11.001 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:10 vm05 bash[41654]: debug 2026-03-10T05:53:10.689+0000 7f82de757140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T05:53:11.001 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:10 vm05 bash[41654]: [10/Mar/2026:05:53:10] ENGINE Bus STARTING
2026-03-10T05:53:11.001 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:10 vm05 bash[41654]: CherryPy Checker:
2026-03-10T05:53:11.001 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:10 vm05 bash[41654]: The Application mounted at '' has an empty config.
2026-03-10T05:53:11.001 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:10 vm05 bash[41654]: [10/Mar/2026:05:53:10] ENGINE Serving on http://:::9283
2026-03-10T05:53:11.001 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:10 vm05 bash[41654]: [10/Mar/2026:05:53:10] ENGINE Bus STARTED
2026-03-10T05:53:11.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:10 vm05 bash[17864]: cluster 2026-03-10T05:53:10.696472+0000 mon.a (mon.0) 19 : cluster [DBG] Standby manager daemon x restarted
2026-03-10T05:53:11.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:10 vm05 bash[17864]: cluster 2026-03-10T05:53:10.696595+0000 mon.a (mon.0) 20 : cluster [DBG] Standby manager daemon x started
2026-03-10T05:53:11.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:10 vm05 bash[17864]: audit 2026-03-10T05:53:10.700965+0000 mon.b (mon.2) 101 : audit [DBG] from='mgr.? 192.168.123.105:0/532490300' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T05:53:11.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:10 vm05 bash[17864]: audit 2026-03-10T05:53:10.701494+0000 mon.b (mon.2) 102 : audit [DBG] from='mgr.? 192.168.123.105:0/532490300' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T05:53:11.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:10 vm05 bash[17864]: audit 2026-03-10T05:53:10.702453+0000 mon.b (mon.2) 103 : audit [DBG] from='mgr.? 192.168.123.105:0/532490300' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T05:53:11.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:10 vm05 bash[17864]: audit 2026-03-10T05:53:10.702744+0000 mon.b (mon.2) 104 : audit [DBG] from='mgr.? 192.168.123.105:0/532490300' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T05:53:11.059 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:10 vm02 bash[52264]: debug 2026-03-10T05:53:10.795+0000 7fd67f3df140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T05:53:11.059 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:10 vm02 bash[52264]: [10/Mar/2026:05:53:10] ENGINE Bus STARTING
2026-03-10T05:53:11.059 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:10 vm02 bash[52264]: CherryPy Checker:
2026-03-10T05:53:11.059 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:10 vm02 bash[52264]: The Application mounted at '' has an empty config.
2026-03-10T05:53:11.059 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:11 vm02 bash[52264]: [10/Mar/2026:05:53:11] ENGINE Serving on http://:::9283
2026-03-10T05:53:11.059 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:11 vm02 bash[52264]: [10/Mar/2026:05:53:11] ENGINE Bus STARTED
2026-03-10T05:53:12.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: cluster 2026-03-10T05:53:10.753690+0000 mon.a (mon.0) 21 : cluster [DBG] mgrmap e35: y(active, since 49s), standbys: x
2026-03-10T05:53:12.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: cluster 2026-03-10T05:53:10.801558+0000 mon.a (mon.0) 22 : cluster [INF] Active manager daemon y restarted
2026-03-10T05:53:12.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: cluster 2026-03-10T05:53:10.801866+0000 mon.a (mon.0) 23 : cluster [INF] Activating manager daemon y
2026-03-10T05:53:12.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: cluster 2026-03-10T05:53:10.822591+0000 mon.a (mon.0) 24 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in
2026-03-10T05:53:12.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: cluster 2026-03-10T05:53:10.823149+0000 mon.a (mon.0) 25 : cluster [DBG] mgrmap e36: y(active, starting, since 0.0214071s), standbys: x
2026-03-10T05:53:12.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: audit 2026-03-10T05:53:10.826090+0000 mon.b (mon.2) 105 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T05:53:12.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: audit 2026-03-10T05:53:10.826193+0000 mon.b (mon.2) 106 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T05:53:12.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: audit 2026-03-10T05:53:10.826312+0000 mon.b (mon.2) 107 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T05:53:12.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: audit 2026-03-10T05:53:10.828109+0000 mon.b (mon.2) 108 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T05:53:12.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: audit 2026-03-10T05:53:10.828211+0000 mon.b (mon.2) 109 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T05:53:12.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: audit 2026-03-10T05:53:10.828321+0000 mon.b (mon.2) 110 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T05:53:12.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: audit 2026-03-10T05:53:10.828490+0000 mon.b (mon.2) 111 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T05:53:12.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: audit 2026-03-10T05:53:10.828671+0000 mon.b (mon.2) 112 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T05:53:12.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: audit 2026-03-10T05:53:10.828800+0000 mon.b (mon.2) 113 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T05:53:12.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: audit 2026-03-10T05:53:10.828948+0000 mon.b (mon.2) 114 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T05:53:12.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: audit 2026-03-10T05:53:10.829102+0000 mon.b (mon.2) 115 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T05:53:12.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: audit 2026-03-10T05:53:10.829253+0000 mon.b (mon.2) 116 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T05:53:12.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: audit 2026-03-10T05:53:10.829402+0000 mon.b (mon.2) 117 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T05:53:12.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: audit 2026-03-10T05:53:10.829580+0000 mon.b (mon.2) 118 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T05:53:12.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: audit 2026-03-10T05:53:10.829710+0000 mon.b (mon.2) 119 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T05:53:12.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: audit 2026-03-10T05:53:10.830003+0000 mon.b (mon.2) 120 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T05:53:12.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: cluster 2026-03-10T05:53:10.835765+0000 mon.a (mon.0) 26 : cluster [INF] Manager daemon y is now available
2026-03-10T05:53:12.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: audit 2026-03-10T05:53:10.879713+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.24992 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: audit 2026-03-10T05:53:10.881937+0000 mon.b (mon.2) 121 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: audit 2026-03-10T05:53:10.882091+0000 mon.b (mon.2) 122 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: audit 2026-03-10T05:53:10.935640+0000 mon.a (mon.0) 28 : audit [INF] from='mgr.24992 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:11 vm02 bash[56371]: audit 2026-03-10T05:53:10.937993+0000 mon.b (mon.2) 123 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T05:53:12.086 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:11 vm02 bash[52264]: debug 2026-03-10T05:53:11.827+0000 7fd64b74b640 -1 mgr.server handle_report got status from non-daemon mon.a
2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: cluster 2026-03-10T05:53:10.753690+0000 mon.a (mon.0) 21 : cluster [DBG] mgrmap e35: y(active, since 49s), standbys: x
2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: cluster 2026-03-10T05:53:10.801558+0000 mon.a (mon.0) 22 : cluster [INF] Active manager daemon y restarted
2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: cluster 2026-03-10T05:53:10.801866+0000 mon.a (mon.0) 23 : cluster [INF] Activating manager daemon y
2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: cluster 2026-03-10T05:53:10.822591+0000 mon.a (mon.0) 24 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in
2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: cluster 2026-03-10T05:53:10.823149+0000 mon.a (mon.0) 25 : cluster [DBG] mgrmap e36: y(active, starting, since 0.0214071s), standbys: x
2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.826090+0000 mon.b (mon.2) 105 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.826193+0000 mon.b (mon.2) 106 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.826312+0000 mon.b (mon.2) 107 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.828109+0000 mon.b (mon.2) 108 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.828211+0000 mon.b (mon.2) 109 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.828321+0000 mon.b (mon.2) 110 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.828490+0000 mon.b (mon.2) 111 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.828671+0000 mon.b (mon.2) 112 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.828800+0000 mon.b (mon.2) 113 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.828948+0000 mon.b (mon.2) 114 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.829102+0000 mon.b (mon.2) 115 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.829253+0000 mon.b (mon.2) 116 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.829402+0000 mon.b (mon.2) 117 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.829580+0000 mon.b (mon.2) 118 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: 
dispatch 2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.829580+0000 mon.b (mon.2) 118 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.829710+0000 mon.b (mon.2) 119 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.829710+0000 mon.b (mon.2) 119 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.830003+0000 mon.b (mon.2) 120 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.830003+0000 mon.b (mon.2) 120 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T05:53:12.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: cluster 2026-03-10T05:53:10.835765+0000 mon.a (mon.0) 26 : cluster [INF] Manager daemon y is now available 2026-03-10T05:53:12.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: cluster 2026-03-10T05:53:10.835765+0000 mon.a (mon.0) 26 : cluster [INF] Manager daemon y is now available 2026-03-10T05:53:12.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.879713+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.24992 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:53:12.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.879713+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.24992 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:53:12.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.881937+0000 mon.b (mon.2) 121 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:53:12.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.881937+0000 mon.b (mon.2) 121 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:53:12.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.882091+0000 mon.b (mon.2) 122 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:53:12.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.882091+0000 mon.b (mon.2) 122 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:53:12.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.935640+0000 mon.a (mon.0) 28 : audit [INF] from='mgr.24992 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:53:12.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.935640+0000 mon.a (mon.0) 28 : audit [INF] from='mgr.24992 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:53:12.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.937993+0000 mon.b (mon.2) 123 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:53:12.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:11 vm02 bash[55303]: audit 2026-03-10T05:53:10.937993+0000 mon.b (mon.2) 123 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:53:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: cluster 2026-03-10T05:53:10.753690+0000 mon.a (mon.0) 21 : cluster [DBG] mgrmap e35: y(active, since 49s), standbys: x 2026-03-10T05:53:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: cluster 2026-03-10T05:53:10.801558+0000 mon.a (mon.0) 22 : cluster [INF] Active manager daemon y restarted 2026-03-10T05:53:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: cluster 2026-03-10T05:53:10.801866+0000 mon.a (mon.0) 23 : cluster [INF] Activating manager daemon y 2026-03-10T05:53:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: cluster 2026-03-10T05:53:10.822591+0000 mon.a (mon.0) 24 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-10T05:53:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: cluster 2026-03-10T05:53:10.823149+0000 mon.a (mon.0) 25 : cluster [DBG] mgrmap e36: y(active, starting, since 0.0214071s), standbys: x 2026-03-10T05:53:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: audit 2026-03-10T05:53:10.826090+0000 mon.b (mon.2) 105 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:53:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: audit 2026-03-10T05:53:10.826193+0000 mon.b (mon.2) 106 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: audit 2026-03-10T05:53:10.826312+0000 mon.b (mon.2) 107 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:53:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: audit 2026-03-10T05:53:10.828109+0000 mon.b (mon.2) 108 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T05:53:12.251 
INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: audit 2026-03-10T05:53:10.828211+0000 mon.b (mon.2) 109 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T05:53:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: audit 2026-03-10T05:53:10.828321+0000 mon.b (mon.2) 110 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:53:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: audit 2026-03-10T05:53:10.828490+0000 mon.b (mon.2) 111 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:53:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: audit 2026-03-10T05:53:10.828671+0000 mon.b (mon.2) 112 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T05:53:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: audit 2026-03-10T05:53:10.828800+0000 mon.b (mon.2) 113 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T05:53:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: audit 2026-03-10T05:53:10.828948+0000 mon.b (mon.2) 114 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T05:53:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: audit 2026-03-10T05:53:10.829102+0000 mon.b (mon.2) 115 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T05:53:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: audit 2026-03-10T05:53:10.829253+0000 mon.b (mon.2) 116 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T05:53:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: audit 2026-03-10T05:53:10.829402+0000 mon.b (mon.2) 117 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T05:53:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: audit 2026-03-10T05:53:10.829580+0000 mon.b (mon.2) 118 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T05:53:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: audit 2026-03-10T05:53:10.829710+0000 mon.b (mon.2) 119 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T05:53:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: audit 2026-03-10T05:53:10.830003+0000 mon.b (mon.2) 120 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T05:53:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: cluster 2026-03-10T05:53:10.835765+0000 mon.a (mon.0) 26 : cluster [INF] Manager daemon y is now available 2026-03-10T05:53:12.251 
INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: audit 2026-03-10T05:53:10.879713+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.24992 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:53:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: audit 2026-03-10T05:53:10.881937+0000 mon.b (mon.2) 121 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:53:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: audit 2026-03-10T05:53:10.882091+0000 mon.b (mon.2) 122 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T05:53:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: audit 2026-03-10T05:53:10.935640+0000 mon.a (mon.0) 28 : audit [INF] from='mgr.24992 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:53:12.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:11 vm05 bash[17864]: audit 2026-03-10T05:53:10.937993+0000 mon.b (mon.2) 123 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T05:53:13.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:12 vm02 bash[56371]: cephadm 2026-03-10T05:53:11.788129+0000 mgr.y (mgr.24992) 1 : cephadm [INF] [10/Mar/2026:05:53:11] ENGINE Bus STARTING 2026-03-10T05:53:13.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:12 vm02 bash[56371]: cephadm 2026-03-10T05:53:11.788129+0000 mgr.y (mgr.24992) 1 : cephadm [INF] [10/Mar/2026:05:53:11] ENGINE Bus STARTING 2026-03-10T05:53:13.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:12 vm02 bash[56371]: cluster 2026-03-10T05:53:11.826231+0000 mon.a (mon.0) 29 : cluster [DBG] mgrmap e37: y(active, since 1.02424s), standbys: x 2026-03-10T05:53:13.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:12 vm02 bash[56371]: cluster 2026-03-10T05:53:11.826231+0000 mon.a (mon.0) 29 : cluster [DBG] mgrmap e37: y(active, since 1.02424s), standbys: x 2026-03-10T05:53:13.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:12 vm02 bash[56371]: cluster 2026-03-10T05:53:11.844719+0000 mgr.y (mgr.24992) 2 : cluster [DBG] pgmap v3: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:53:13.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:12 vm02 bash[56371]: cluster 2026-03-10T05:53:11.844719+0000 mgr.y (mgr.24992) 2 : cluster [DBG] pgmap v3: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:53:13.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:12 vm02 bash[56371]: cephadm 2026-03-10T05:53:11.895934+0000 mgr.y (mgr.24992) 3 : cephadm [INF] [10/Mar/2026:05:53:11] ENGINE Serving on https://192.168.123.102:7150 2026-03-10T05:53:13.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:12 vm02 bash[56371]: cephadm 2026-03-10T05:53:11.895934+0000 mgr.y (mgr.24992) 3 : cephadm [INF] [10/Mar/2026:05:53:11] ENGINE Serving on https://192.168.123.102:7150 2026-03-10T05:53:13.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:12 vm02 bash[56371]: cephadm 2026-03-10T05:53:11.896380+0000 mgr.y 
(mgr.24992) 4 : cephadm [INF] [10/Mar/2026:05:53:11] ENGINE Client ('192.168.123.102', 41896) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T05:53:13.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:12 vm02 bash[56371]: cephadm 2026-03-10T05:53:11.896380+0000 mgr.y (mgr.24992) 4 : cephadm [INF] [10/Mar/2026:05:53:11] ENGINE Client ('192.168.123.102', 41896) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T05:53:13.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:12 vm02 bash[56371]: cephadm 2026-03-10T05:53:11.997011+0000 mgr.y (mgr.24992) 5 : cephadm [INF] [10/Mar/2026:05:53:11] ENGINE Serving on http://192.168.123.102:8765 2026-03-10T05:53:13.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:12 vm02 bash[56371]: cephadm 2026-03-10T05:53:11.997011+0000 mgr.y (mgr.24992) 5 : cephadm [INF] [10/Mar/2026:05:53:11] ENGINE Serving on http://192.168.123.102:8765 2026-03-10T05:53:13.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:12 vm02 bash[56371]: cephadm 2026-03-10T05:53:11.997265+0000 mgr.y (mgr.24992) 6 : cephadm [INF] [10/Mar/2026:05:53:11] ENGINE Bus STARTED 2026-03-10T05:53:13.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:12 vm02 bash[56371]: cephadm 2026-03-10T05:53:11.997265+0000 mgr.y (mgr.24992) 6 : cephadm [INF] [10/Mar/2026:05:53:11] ENGINE Bus STARTED 2026-03-10T05:53:13.085 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:12 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:53:12] "GET /metrics HTTP/1.1" 200 35013 "" "Prometheus/2.51.0" 2026-03-10T05:53:13.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:12 vm02 bash[55303]: cephadm 2026-03-10T05:53:11.788129+0000 mgr.y (mgr.24992) 1 : cephadm [INF] [10/Mar/2026:05:53:11] ENGINE Bus STARTING 2026-03-10T05:53:13.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:12 vm02 bash[55303]: cephadm 2026-03-10T05:53:11.788129+0000 mgr.y (mgr.24992) 1 : cephadm [INF] [10/Mar/2026:05:53:11] ENGINE Bus STARTING 2026-03-10T05:53:13.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:12 vm02 bash[55303]: cluster 2026-03-10T05:53:11.826231+0000 mon.a (mon.0) 29 : cluster [DBG] mgrmap e37: y(active, since 1.02424s), standbys: x 2026-03-10T05:53:13.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:12 vm02 bash[55303]: cluster 2026-03-10T05:53:11.826231+0000 mon.a (mon.0) 29 : cluster [DBG] mgrmap e37: y(active, since 1.02424s), standbys: x 2026-03-10T05:53:13.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:12 vm02 bash[55303]: cluster 2026-03-10T05:53:11.844719+0000 mgr.y (mgr.24992) 2 : cluster [DBG] pgmap v3: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:53:13.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:12 vm02 bash[55303]: cluster 2026-03-10T05:53:11.844719+0000 mgr.y (mgr.24992) 2 : cluster [DBG] pgmap v3: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:53:13.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:12 vm02 bash[55303]: cephadm 2026-03-10T05:53:11.895934+0000 mgr.y (mgr.24992) 3 : cephadm [INF] [10/Mar/2026:05:53:11] ENGINE Serving on https://192.168.123.102:7150 2026-03-10T05:53:13.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:12 vm02 bash[55303]: cephadm 2026-03-10T05:53:11.895934+0000 mgr.y (mgr.24992) 3 : cephadm [INF] [10/Mar/2026:05:53:11] ENGINE 
Serving on https://192.168.123.102:7150 2026-03-10T05:53:13.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:12 vm02 bash[55303]: cephadm 2026-03-10T05:53:11.896380+0000 mgr.y (mgr.24992) 4 : cephadm [INF] [10/Mar/2026:05:53:11] ENGINE Client ('192.168.123.102', 41896) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T05:53:13.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:12 vm02 bash[55303]: cephadm 2026-03-10T05:53:11.896380+0000 mgr.y (mgr.24992) 4 : cephadm [INF] [10/Mar/2026:05:53:11] ENGINE Client ('192.168.123.102', 41896) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T05:53:13.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:12 vm02 bash[55303]: cephadm 2026-03-10T05:53:11.997011+0000 mgr.y (mgr.24992) 5 : cephadm [INF] [10/Mar/2026:05:53:11] ENGINE Serving on http://192.168.123.102:8765 2026-03-10T05:53:13.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:12 vm02 bash[55303]: cephadm 2026-03-10T05:53:11.997011+0000 mgr.y (mgr.24992) 5 : cephadm [INF] [10/Mar/2026:05:53:11] ENGINE Serving on http://192.168.123.102:8765 2026-03-10T05:53:13.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:12 vm02 bash[55303]: cephadm 2026-03-10T05:53:11.997265+0000 mgr.y (mgr.24992) 6 : cephadm [INF] [10/Mar/2026:05:53:11] ENGINE Bus STARTED 2026-03-10T05:53:13.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:12 vm02 bash[55303]: cephadm 2026-03-10T05:53:11.997265+0000 mgr.y (mgr.24992) 6 : cephadm [INF] [10/Mar/2026:05:53:11] ENGINE Bus STARTED 2026-03-10T05:53:13.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:12 vm05 bash[17864]: cephadm 2026-03-10T05:53:11.788129+0000 mgr.y (mgr.24992) 1 : cephadm [INF] [10/Mar/2026:05:53:11] ENGINE Bus STARTING 2026-03-10T05:53:13.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:12 vm05 bash[17864]: cluster 2026-03-10T05:53:11.826231+0000 mon.a (mon.0) 29 : cluster [DBG] mgrmap e37: y(active, since 1.02424s), standbys: x 2026-03-10T05:53:13.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:12 vm05 bash[17864]: cluster 2026-03-10T05:53:11.844719+0000 mgr.y (mgr.24992) 2 : cluster [DBG] pgmap v3: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:53:13.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:12 vm05 bash[17864]: cephadm 2026-03-10T05:53:11.895934+0000 mgr.y (mgr.24992) 3 : cephadm [INF] [10/Mar/2026:05:53:11] ENGINE Serving on https://192.168.123.102:7150 2026-03-10T05:53:13.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:12 vm05 bash[17864]: cephadm 2026-03-10T05:53:11.896380+0000 mgr.y (mgr.24992) 4 : cephadm [INF] [10/Mar/2026:05:53:11] ENGINE Client ('192.168.123.102', 41896) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T05:53:13.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:12 vm05 bash[17864]: cephadm 2026-03-10T05:53:11.997011+0000 mgr.y (mgr.24992) 5 : cephadm [INF] [10/Mar/2026:05:53:11] ENGINE Serving on http://192.168.123.102:8765 2026-03-10T05:53:13.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:12 vm05 bash[17864]: cephadm 2026-03-10T05:53:11.997265+0000 mgr.y (mgr.24992) 6 : cephadm [INF] [10/Mar/2026:05:53:11] ENGINE Bus STARTED 2026-03-10T05:53:14.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:13 vm02 bash[56371]: 
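Note: the cephadm [INF] ... ENGINE lines above are the active mgr's embedded CherryPy server coming back up after the mgr.y restart; the cluster log then replays them through each mon's journal. A hedged convenience check for confirming which endpoints the newly active mgr publishes (not part of this job's task list, shown only for orientation):

    ceph mgr stat        # active mgr name and availability
    ceph mgr services    # JSON map of mgr module -> URL served by the active mgr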
2026-03-10T05:53:14.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:13 vm02 bash[56371]: cluster 2026-03-10T05:53:12.826616+0000 mgr.y (mgr.24992) 7 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:53:14.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:13 vm02 bash[56371]: cluster 2026-03-10T05:53:12.840795+0000 mon.a (mon.0) 30 : cluster [DBG] mgrmap e38: y(active, since 2s), standbys: x
2026-03-10T05:53:14.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:13 vm02 bash[55303]: cluster 2026-03-10T05:53:12.826616+0000 mgr.y (mgr.24992) 7 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:53:14.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:13 vm02 bash[55303]: cluster 2026-03-10T05:53:12.840795+0000 mon.a (mon.0) 30 : cluster [DBG] mgrmap e38: y(active, since 2s), standbys: x
2026-03-10T05:53:14.146 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:13 vm05 bash[17864]: cluster 2026-03-10T05:53:12.826616+0000 mgr.y (mgr.24992) 7 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:53:14.146 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:13 vm05 bash[17864]: cluster 2026-03-10T05:53:12.840795+0000 mon.a (mon.0) 30 : cluster [DBG] mgrmap e38: y(active, since 2s), standbys: x
2026-03-10T05:53:14.501 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:53:14 vm05 bash[41269]: ts=2026-03-10T05:53:14.148Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:53:16.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:15 vm05 bash[17864]: cluster 2026-03-10T05:53:14.826917+0000 mgr.y (mgr.24992) 8 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:53:16.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:15 vm05 bash[17864]: cluster 2026-03-10T05:53:14.848802+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e39: y(active, since 4s), standbys: x
2026-03-10T05:53:16.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:15 vm02 bash[56371]: cluster 2026-03-10T05:53:14.826917+0000 mgr.y (mgr.24992) 8 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:53:16.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:15 vm02 bash[56371]: cluster 2026-03-10T05:53:14.848802+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e39: y(active, since 4s), standbys: x
2026-03-10T05:53:16.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:15 vm02 bash[55303]: cluster 2026-03-10T05:53:14.826917+0000 mgr.y (mgr.24992) 8 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:53:16.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:15 vm02 bash[55303]: cluster 2026-03-10T05:53:14.848802+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e39: y(active, since 4s), standbys: x
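Note: the CephOSDFlapping evaluation failure above (and the CephNodeDiskspaceWarning failure just below) is a many-to-many join error in the alert rule, not a firing alert. Mid-upgrade, Prometheus briefly holds two ceph_osd_metadata series per daemon that differ only in their instance/cluster labels (one from the old scrape target, one from the new), so the on (ceph_daemon) group_left (hostname) join finds duplicate series on its right-hand side. A minimal sketch of one way to make the join unambiguous, assuming it is acceptable to collapse the metadata down to the join labels first (an illustrative rewrite, not the expression shipped in ceph_alerts.yml):

    (
      rate(ceph_osd_up[5m])
        * on (ceph_daemon) group_left (hostname)
          max by (ceph_daemon, hostname) (ceph_osd_metadata)
    ) * 60 > 1

The max by (...) aggregation deduplicates the metadata series before the join; what the alert measures is unchanged.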
2026-03-10T05:53:17.251 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:53:16 vm05 bash[41269]: ts=2026-03-10T05:53:16.949Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:53:18.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:17 vm05 bash[17864]: audit 2026-03-10T05:53:16.617593+0000 mon.a (mon.0) 32 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:18.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:17 vm05 bash[17864]: audit 2026-03-10T05:53:16.626920+0000 mon.a (mon.0) 33 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:18.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:17 vm05 bash[17864]: audit 2026-03-10T05:53:16.803464+0000 mon.a (mon.0) 34 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:18.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:17 vm05 bash[17864]: audit 2026-03-10T05:53:16.809262+0000 mon.a (mon.0) 35 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:18.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:17 vm05 bash[17864]: audit 2026-03-10T05:53:17.200292+0000 mon.a (mon.0) 36 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:18.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:17 vm05 bash[17864]: audit 2026-03-10T05:53:17.206456+0000 mon.a (mon.0) 37 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:18.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:17 vm05 bash[17864]: audit 2026-03-10T05:53:17.208977+0000 mon.a (mon.0) 38 : audit [INF] from='mgr.24992 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:53:18.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:17 vm05 bash[17864]: audit 2026-03-10T05:53:17.211459+0000 mon.b (mon.2) 124 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:53:18.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:17 vm05 bash[17864]: audit 2026-03-10T05:53:17.381078+0000 mon.a (mon.0) 39 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:18.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:17 vm05 bash[17864]: audit 2026-03-10T05:53:17.387976+0000 mon.a (mon.0) 40 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:18.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:17 vm02 bash[56371]: audit 2026-03-10T05:53:16.617593+0000 mon.a (mon.0) 32 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:18.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:17 vm02 bash[56371]: audit 2026-03-10T05:53:16.626920+0000 mon.a (mon.0) 33 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:18.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:17 vm02 bash[56371]: audit 2026-03-10T05:53:16.803464+0000 mon.a (mon.0) 34 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:18.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:17 vm02 bash[56371]: audit 2026-03-10T05:53:16.809262+0000 mon.a (mon.0) 35 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:18.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:17 vm02 bash[56371]: audit 2026-03-10T05:53:17.200292+0000 mon.a (mon.0) 36 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:18.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:17 vm02 bash[56371]: audit 2026-03-10T05:53:17.206456+0000 mon.a (mon.0) 37 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:18.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:17 vm02 bash[56371]: audit 2026-03-10T05:53:17.208977+0000 mon.a (mon.0) 38 : audit [INF] from='mgr.24992 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:53:18.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:17 vm02 bash[56371]: audit 2026-03-10T05:53:17.211459+0000 mon.b (mon.2) 124 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:53:18.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:17 vm02 bash[56371]: audit 2026-03-10T05:53:17.381078+0000 mon.a (mon.0) 39 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:18.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:17 vm02 bash[56371]: audit 2026-03-10T05:53:17.387976+0000 mon.a (mon.0) 40 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:18.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:17 vm02 bash[55303]: audit 2026-03-10T05:53:16.617593+0000 mon.a (mon.0) 32 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:18.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:17 vm02 bash[55303]: audit 2026-03-10T05:53:16.626920+0000 mon.a (mon.0) 33 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:18.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:17 vm02 bash[55303]: audit 2026-03-10T05:53:16.803464+0000 mon.a (mon.0) 34 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:18.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:17 vm02 bash[55303]: audit 2026-03-10T05:53:16.809262+0000 mon.a (mon.0) 35 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:18.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:17 vm02 bash[55303]: audit 2026-03-10T05:53:17.200292+0000 mon.a (mon.0) 36 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:18.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:17 vm02 bash[55303]: audit 2026-03-10T05:53:17.206456+0000 mon.a (mon.0) 37 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:18.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:17 vm02 bash[55303]: audit 2026-03-10T05:53:17.208977+0000 mon.a (mon.0) 38 : audit [INF] from='mgr.24992 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:53:18.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:17 vm02 bash[55303]: audit 2026-03-10T05:53:17.211459+0000 mon.b (mon.2) 124 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T05:53:18.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:17 vm02 bash[55303]: audit 2026-03-10T05:53:17.381078+0000 mon.a (mon.0) 39 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:18.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:17 vm02 bash[55303]: audit 2026-03-10T05:53:17.387976+0000 mon.a (mon.0) 40 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:18.361 INFO:teuthology.orchestra.run.vm02.stdout:true
2026-03-10T05:53:18.746 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T05:53:18.746 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.a vm02 *:9093,9094 running (81s) 1s ago 6m 14.8M - 0.25.0 c8568f914cd2 7a7c5c2cddb6
2026-03-10T05:53:18.746 INFO:teuthology.orchestra.run.vm02.stdout:grafana.a vm05 *:3000 running (79s) 2s ago 5m 39.4M - dad864ee21e9 95c6d977988a
2026-03-10T05:53:18.746 INFO:teuthology.orchestra.run.vm02.stdout:iscsi.foo.vm02.mxbwmh vm02 running (42s) 1s ago 5m 43.0M - 3.5 e1d6a67b021e 62aba5b41046
2026-03-10T05:53:18.746 INFO:teuthology.orchestra.run.vm02.stdout:mgr.x vm05 *:8443,9283,8765 running (39s) 2s ago 8m 463M - 19.2.3-678-ge911bdeb 654f31e6858e 7579626ada90
2026-03-10T05:53:18.746 INFO:teuthology.orchestra.run.vm02.stdout:mgr.y vm02 *:8443,9283,8765 running (70s) 1s ago 9m 508M - 19.2.3-678-ge911bdeb 654f31e6858e ef46d0f7b15e
2026-03-10T05:53:18.746 INFO:teuthology.orchestra.run.vm02.stdout:mon.a vm02 running (12s) 1s ago 9m 30.8M 2048M 19.2.3-678-ge911bdeb 654f31e6858e df3a0a290a95
2026-03-10T05:53:18.746 INFO:teuthology.orchestra.run.vm02.stdout:mon.b vm05 running (8m) 2s ago 8m 51.8M 2048M 17.2.0 e1d6a67b021e 96a2a71fd403
2026-03-10T05:53:18.746 INFO:teuthology.orchestra.run.vm02.stdout:mon.c vm02 running (26s) 1s ago 8m 32.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 7f2cdf1b7aa6
2026-03-10T05:53:18.746 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.a vm02 *:9100 running (77s) 1s ago 6m 7235k - 1.7.0 72c9c2088986 90288450bd1f
2026-03-10T05:53:18.746 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.b vm05 *:9100 running (76s) 2s ago 6m 7367k - 1.7.0 72c9c2088986 4e859143cb0e
2026-03-10T05:53:18.746 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (8m) 1s ago 8m 51.4M 4096M 17.2.0 e1d6a67b021e 563d55a3e6a4
2026-03-10T05:53:18.746 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (8m) 1s ago 8m 54.2M 4096M 17.2.0 e1d6a67b021e 8c25a1e89677
2026-03-10T05:53:18.746 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (7m) 1s ago 7m 49.5M 4096M 17.2.0 e1d6a67b021e 826f54bdbc5c
2026-03-10T05:53:18.746 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm02 running (7m) 1s ago 7m 53.2M 4096M 17.2.0 e1d6a67b021e 0c6cfa53c9fd
2026-03-10T05:53:18.746 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (7m) 2s ago 7m 53.3M 4096M 17.2.0 e1d6a67b021e 4ffe1741f201
2026-03-10T05:53:18.746 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (7m) 2s ago 7m 51.9M 4096M 17.2.0 e1d6a67b021e cba5583c238e
2026-03-10T05:53:18.746 INFO:teuthology.orchestra.run.vm02.stdout:osd.6 vm05 running (6m) 2s ago 6m 49.7M 4096M 17.2.0 e1d6a67b021e 9d1b370357d7
2026-03-10T05:53:18.746 INFO:teuthology.orchestra.run.vm02.stdout:osd.7 vm05 running (6m) 2s ago 6m 51.3M 4096M 17.2.0 e1d6a67b021e 8a4837b788cf
2026-03-10T05:53:18.746 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.a vm05 *:9095 running (41s) 2s ago 6m 37.1M - 2.51.0 1d3b7f56885b 3328811f8f28
2026-03-10T05:53:18.746 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm02.pbogjd vm02 *:8000 running (5m) 1s ago 5m 86.8M - 17.2.0 e1d6a67b021e 2ab2ffd1abaa
2026-03-10T05:53:18.746 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm05.hvmsxl vm05 *:8000 running (5m) 2s ago 5m 85.8M - 17.2.0 e1d6a67b021e 85d1c77b7e9d
2026-03-10T05:53:18.746 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm02.pglcfm vm02 *:80 running (5m) 1s ago 5m 85.6M - 17.2.0 e1d6a67b021e ef152a460673
2026-03-10T05:53:18.746 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm05.hqqmap vm05 *:80 running (5m) 2s ago 5m 85.8M - 17.2.0 e1d6a67b021e 29c9ee794f34
2026-03-10T05:53:18.978 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:53:18.978 INFO:teuthology.orchestra.run.vm02.stdout: "mon": {
2026-03-10T05:53:18.979 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 1,
2026-03-10T05:53:18.979 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T05:53:18.979 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:53:18.979 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": {
2026-03-10T05:53:18.979 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T05:53:18.979 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:53:18.979 INFO:teuthology.orchestra.run.vm02.stdout: "osd": {
2026-03-10T05:53:18.979 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8
2026-03-10T05:53:18.979 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:53:18.979 INFO:teuthology.orchestra.run.vm02.stdout: "rgw": {
2026-03-10T05:53:18.979 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4
2026-03-10T05:53:18.979 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:53:18.979 INFO:teuthology.orchestra.run.vm02.stdout: "overall": {
2026-03-10T05:53:18.979 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 13,
2026-03-10T05:53:18.979 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4
2026-03-10T05:53:18.979 INFO:teuthology.orchestra.run.vm02.stdout: }
2026-03-10T05:53:18.979 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:53:19.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:18 vm05 bash[17864]: cluster 2026-03-10T05:53:16.827149+0000 mgr.y (mgr.24992) 9 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:53:19.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:18 vm05 bash[17864]: audit 2026-03-10T05:53:16.871695+0000 mgr.y (mgr.24992) 10 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:53:19.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:18 vm02 bash[56371]: cluster 2026-03-10T05:53:16.827149+0000 mgr.y (mgr.24992) 9 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:53:19.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:18 vm02 bash[56371]: audit 2026-03-10T05:53:16.871695+0000 mgr.y (mgr.24992) 10 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:53:19.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:18 vm02 bash[55303]: cluster 2026-03-10T05:53:16.827149+0000 mgr.y (mgr.24992) 9 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:53:19.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:18 vm02 bash[55303]: audit 2026-03-10T05:53:16.871695+0000 mgr.y (mgr.24992) 10 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
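Note: the orch ps table and ceph versions output above capture the cluster mid-flight: both mgrs and two of the three mons (mon.a, mon.c) already report 19.2.3-678-ge911bdeb, while mon.b, all eight OSDs, and the rgw/iscsi daemons are still on 17.2.0, consistent with the 4/23 progress reported below. A hedged one-liner for listing the daemons still on the old release (field names as emitted by the JSON form of orch ps in recent cephadm; an illustration, this command is not run by the job):

    ceph orch ps --format json | jq -r '.[] | select(.version == "17.2.0") | "\(.daemon_type).\(.daemon_id)\t\(.hostname)"'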
"quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", 2026-03-10T05:53:19.174 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true, 2026-03-10T05:53:19.174 INFO:teuthology.orchestra.run.vm02.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-10T05:53:19.174 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [ 2026-03-10T05:53:19.174 INFO:teuthology.orchestra.run.vm02.stdout: "mgr" 2026-03-10T05:53:19.174 INFO:teuthology.orchestra.run.vm02.stdout: ], 2026-03-10T05:53:19.174 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "4/23 daemons upgraded", 2026-03-10T05:53:19.174 INFO:teuthology.orchestra.run.vm02.stdout: "message": "", 2026-03-10T05:53:19.174 INFO:teuthology.orchestra.run.vm02.stdout: "is_paused": false 2026-03-10T05:53:19.174 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T05:53:19.399 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_OK 2026-03-10T05:53:20.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:19 vm05 bash[17864]: audit 2026-03-10T05:53:18.350285+0000 mgr.y (mgr.24992) 11 : audit [DBG] from='client.44124 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:20.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:19 vm05 bash[17864]: audit 2026-03-10T05:53:18.541945+0000 mgr.y (mgr.24992) 12 : audit [DBG] from='client.44127 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:20.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:19 vm05 bash[17864]: audit 2026-03-10T05:53:18.740648+0000 mgr.y (mgr.24992) 13 : audit [DBG] from='client.25016 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:20.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:19 vm05 bash[17864]: cluster 2026-03-10T05:53:18.827612+0000 mgr.y (mgr.24992) 14 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-10T05:53:20.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:19 vm05 bash[17864]: audit 2026-03-10T05:53:18.977408+0000 mon.c (mon.1) 2 : audit [DBG] from='client.? 192.168.123.102:0/4225137029' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:53:20.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:19 vm05 bash[17864]: audit 2026-03-10T05:53:19.172757+0000 mgr.y (mgr.24992) 15 : audit [DBG] from='client.44145 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:20.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:19 vm05 bash[17864]: audit 2026-03-10T05:53:19.398160+0000 mon.a (mon.0) 41 : audit [DBG] from='client.? 
192.168.123.102:0/1639478505' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:53:20.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:19 vm02 bash[56371]: audit 2026-03-10T05:53:18.350285+0000 mgr.y (mgr.24992) 11 : audit [DBG] from='client.44124 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:19 vm02 bash[56371]: audit 2026-03-10T05:53:18.350285+0000 mgr.y (mgr.24992) 11 : audit [DBG] from='client.44124 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:19 vm02 bash[56371]: audit 2026-03-10T05:53:18.541945+0000 mgr.y (mgr.24992) 12 : audit [DBG] from='client.44127 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:19 vm02 bash[56371]: audit 2026-03-10T05:53:18.541945+0000 mgr.y (mgr.24992) 12 : audit [DBG] from='client.44127 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:19 vm02 bash[56371]: audit 2026-03-10T05:53:18.740648+0000 mgr.y (mgr.24992) 13 : audit [DBG] from='client.25016 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:19 vm02 bash[56371]: audit 2026-03-10T05:53:18.740648+0000 mgr.y (mgr.24992) 13 : audit [DBG] from='client.25016 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:19 vm02 bash[56371]: cluster 2026-03-10T05:53:18.827612+0000 mgr.y (mgr.24992) 14 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:19 vm02 bash[56371]: cluster 2026-03-10T05:53:18.827612+0000 mgr.y (mgr.24992) 14 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:19 vm02 bash[56371]: audit 2026-03-10T05:53:18.977408+0000 mon.c (mon.1) 2 : audit [DBG] from='client.? 192.168.123.102:0/4225137029' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:19 vm02 bash[56371]: audit 2026-03-10T05:53:18.977408+0000 mon.c (mon.1) 2 : audit [DBG] from='client.? 
192.168.123.102:0/4225137029' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:19 vm02 bash[56371]: audit 2026-03-10T05:53:19.172757+0000 mgr.y (mgr.24992) 15 : audit [DBG] from='client.44145 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:19 vm02 bash[56371]: audit 2026-03-10T05:53:19.172757+0000 mgr.y (mgr.24992) 15 : audit [DBG] from='client.44145 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:19 vm02 bash[56371]: audit 2026-03-10T05:53:19.398160+0000 mon.a (mon.0) 41 : audit [DBG] from='client.? 192.168.123.102:0/1639478505' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:19 vm02 bash[56371]: audit 2026-03-10T05:53:19.398160+0000 mon.a (mon.0) 41 : audit [DBG] from='client.? 192.168.123.102:0/1639478505' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:19 vm02 bash[55303]: audit 2026-03-10T05:53:18.350285+0000 mgr.y (mgr.24992) 11 : audit [DBG] from='client.44124 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:19 vm02 bash[55303]: audit 2026-03-10T05:53:18.350285+0000 mgr.y (mgr.24992) 11 : audit [DBG] from='client.44124 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:19 vm02 bash[55303]: audit 2026-03-10T05:53:18.541945+0000 mgr.y (mgr.24992) 12 : audit [DBG] from='client.44127 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:19 vm02 bash[55303]: audit 2026-03-10T05:53:18.541945+0000 mgr.y (mgr.24992) 12 : audit [DBG] from='client.44127 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:19 vm02 bash[55303]: audit 2026-03-10T05:53:18.740648+0000 mgr.y (mgr.24992) 13 : audit [DBG] from='client.25016 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:19 vm02 bash[55303]: audit 2026-03-10T05:53:18.740648+0000 mgr.y (mgr.24992) 13 : audit [DBG] from='client.25016 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:19 vm02 bash[55303]: cluster 2026-03-10T05:53:18.827612+0000 mgr.y (mgr.24992) 14 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:19 vm02 bash[55303]: cluster 2026-03-10T05:53:18.827612+0000 mgr.y (mgr.24992) 14 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s 
wr, 11 op/s 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:19 vm02 bash[55303]: audit 2026-03-10T05:53:18.977408+0000 mon.c (mon.1) 2 : audit [DBG] from='client.? 192.168.123.102:0/4225137029' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:19 vm02 bash[55303]: audit 2026-03-10T05:53:18.977408+0000 mon.c (mon.1) 2 : audit [DBG] from='client.? 192.168.123.102:0/4225137029' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:19 vm02 bash[55303]: audit 2026-03-10T05:53:19.172757+0000 mgr.y (mgr.24992) 15 : audit [DBG] from='client.44145 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:19 vm02 bash[55303]: audit 2026-03-10T05:53:19.172757+0000 mgr.y (mgr.24992) 15 : audit [DBG] from='client.44145 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:19 vm02 bash[55303]: audit 2026-03-10T05:53:19.398160+0000 mon.a (mon.0) 41 : audit [DBG] from='client.? 192.168.123.102:0/1639478505' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:53:20.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:19 vm02 bash[55303]: audit 2026-03-10T05:53:19.398160+0000 mon.a (mon.0) 41 : audit [DBG] from='client.? 192.168.123.102:0/1639478505' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:53:22.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:22 vm05 bash[17864]: cluster 2026-03-10T05:53:20.827854+0000 mgr.y (mgr.24992) 16 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-10T05:53:22.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:22 vm02 bash[56371]: cluster 2026-03-10T05:53:20.827854+0000 mgr.y (mgr.24992) 16 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-10T05:53:22.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:22 vm02 bash[56371]: cluster 2026-03-10T05:53:20.827854+0000 mgr.y (mgr.24992) 16 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-10T05:53:22.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:22 vm02 bash[55303]: cluster 2026-03-10T05:53:20.827854+0000 mgr.y (mgr.24992) 16 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-10T05:53:22.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:22 vm02 bash[55303]: cluster 2026-03-10T05:53:20.827854+0000 mgr.y (mgr.24992) 16 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-10T05:53:23.196 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:22 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:53:22] "GET /metrics HTTP/1.1" 200 35013 "" "Prometheus/2.51.0" 2026-03-10T05:53:24.432 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:53:24 vm05 bash[41269]: ts=2026-03-10T05:53:24.147Z caller=group.go:483 level=warn 
name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T05:53:24.432 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:24 vm05 bash[17864]: cluster 2026-03-10T05:53:22.828277+0000 mgr.y (mgr.24992) 17 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T05:53:24.432 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:24 vm05 bash[17864]: audit 2026-03-10T05:53:23.946874+0000 mon.a (mon.0) 42 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.432 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:24 vm05 bash[17864]: audit 2026-03-10T05:53:23.952036+0000 mon.a (mon.0) 43 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.432 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:24 vm05 bash[17864]: audit 2026-03-10T05:53:23.953938+0000 mon.a (mon.0) 44 : audit [INF] from='mgr.24992 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:53:24.432 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:24 vm05 bash[17864]: audit 2026-03-10T05:53:23.956562+0000 mon.b (mon.2) 125 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:53:24.432 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:24 vm05 bash[17864]: audit 2026-03-10T05:53:23.957631+0000 mon.b (mon.2) 126 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:24.432 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:24 vm05 bash[17864]: audit 2026-03-10T05:53:23.958120+0000 
mon.b (mon.2) 127 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:53:24.432 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:24 vm05 bash[17864]: audit 2026-03-10T05:53:24.102338+0000 mon.a (mon.0) 45 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.432 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:24 vm05 bash[17864]: audit 2026-03-10T05:53:24.107356+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.432 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:24 vm05 bash[17864]: audit 2026-03-10T05:53:24.112015+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.432 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:24 vm05 bash[17864]: audit 2026-03-10T05:53:24.116387+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.432 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:24 vm05 bash[17864]: audit 2026-03-10T05:53:24.121138+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.432 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:24 vm05 bash[17864]: audit 2026-03-10T05:53:24.162889+0000 mon.b (mon.2) 128 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:53:24.432 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:24 vm05 bash[17864]: audit 2026-03-10T05:53:24.164427+0000 mon.b (mon.2) 129 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:53:24.432 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:24 vm05 bash[17864]: audit 2026-03-10T05:53:24.165302+0000 mon.b (mon.2) 130 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-10T05:53:24.432 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:24 vm05 bash[17864]: audit 2026-03-10T05:53:24.165965+0000 mon.b (mon.2) 131 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["b"]}]: dispatch 2026-03-10T05:53:24.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: cluster 2026-03-10T05:53:22.828277+0000 mgr.y (mgr.24992) 17 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T05:53:24.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: cluster 2026-03-10T05:53:22.828277+0000 mgr.y (mgr.24992) 17 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T05:53:24.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:23.946874+0000 mon.a (mon.0) 42 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:23.946874+0000 mon.a (mon.0) 42 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:23.952036+0000 mon.a (mon.0) 43 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 
2026-03-10T05:53:23.952036+0000 mon.a (mon.0) 43 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:23.953938+0000 mon.a (mon.0) 44 : audit [INF] from='mgr.24992 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:23.953938+0000 mon.a (mon.0) 44 : audit [INF] from='mgr.24992 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:23.956562+0000 mon.b (mon.2) 125 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:23.956562+0000 mon.b (mon.2) 125 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:23.957631+0000 mon.b (mon.2) 126 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:23.957631+0000 mon.b (mon.2) 126 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:23.958120+0000 mon.b (mon.2) 127 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:23.958120+0000 mon.b (mon.2) 127 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:24.102338+0000 mon.a (mon.0) 45 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:24.102338+0000 mon.a (mon.0) 45 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:24.107356+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:24.107356+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:24.112015+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 
05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:24.112015+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:24.116387+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:24.116387+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:24.121138+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:24.121138+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:24.162889+0000 mon.b (mon.2) 128 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:24.162889+0000 mon.b (mon.2) 128 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:24.164427+0000 mon.b (mon.2) 129 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:24.164427+0000 mon.b (mon.2) 129 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:24.165302+0000 mon.b (mon.2) 130 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:24.165302+0000 mon.b (mon.2) 130 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:24.165965+0000 mon.b (mon.2) 131 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["b"]}]: dispatch 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:24 vm02 bash[56371]: audit 2026-03-10T05:53:24.165965+0000 mon.b (mon.2) 131 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["b"]}]: dispatch 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: cluster 2026-03-10T05:53:22.828277+0000 mgr.y (mgr.24992) 17 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: cluster 
2026-03-10T05:53:22.828277+0000 mgr.y (mgr.24992) 17 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:23.946874+0000 mon.a (mon.0) 42 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:23.946874+0000 mon.a (mon.0) 42 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:23.952036+0000 mon.a (mon.0) 43 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:23.952036+0000 mon.a (mon.0) 43 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:23.953938+0000 mon.a (mon.0) 44 : audit [INF] from='mgr.24992 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:23.953938+0000 mon.a (mon.0) 44 : audit [INF] from='mgr.24992 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:23.956562+0000 mon.b (mon.2) 125 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:23.956562+0000 mon.b (mon.2) 125 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm02", "name": "osd_memory_target"}]: dispatch 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:23.957631+0000 mon.b (mon.2) 126 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:23.957631+0000 mon.b (mon.2) 126 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:23.958120+0000 mon.b (mon.2) 127 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:23.958120+0000 mon.b (mon.2) 127 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:24.102338+0000 mon.a (mon.0) 45 : audit [INF] from='mgr.24992 ' 
entity='mgr.y' 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:24.102338+0000 mon.a (mon.0) 45 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:24.107356+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:24.107356+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:24.112015+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:24.112015+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:24.116387+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:24.116387+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:24.121138+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:24.121138+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.24992 ' entity='mgr.y' 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:24.162889+0000 mon.b (mon.2) 128 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:53:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:24.162889+0000 mon.b (mon.2) 128 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:53:24.586 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:24.164427+0000 mon.b (mon.2) 129 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:53:24.586 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:24.164427+0000 mon.b (mon.2) 129 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:53:24.586 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:24.165302+0000 mon.b (mon.2) 130 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-10T05:53:24.586 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:24.165302+0000 mon.b (mon.2) 130 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-10T05:53:24.586 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 
bash[55303]: audit 2026-03-10T05:53:24.165965+0000 mon.b (mon.2) 131 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["b"]}]: dispatch 2026-03-10T05:53:24.586 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:24 vm02 bash[55303]: audit 2026-03-10T05:53:24.165965+0000 mon.b (mon.2) 131 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["b"]}]: dispatch 2026-03-10T05:53:25.152 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:25.152 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:53:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:25.153 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:53:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:25.153 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:53:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:25.153 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:25.153 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 systemd[1]: Stopping Ceph mon.b for 107483ae-1c44-11f1-b530-c1172cd6122a... 2026-03-10T05:53:25.153 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:53:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
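Between the KillMode warnings, mon.b on vm05 is the first monitor taken down for the upgrade, and the audit entries just above show the gate cephadm applies first: mgr.y dispatches "mon ok-to-stop" with ids ["b"] and only stops the unit once the monitors confirm quorum would survive. A minimal sketch of the same check done by hand, outside this run (the mon id "b" is taken from this log; assumes an admin keyring on the host and jq, which this suite installs):

    # Sketch: manual equivalent of the safety gate cephadm applied above.
    if ceph mon ok-to-stop b; then
        # Quorum survives without mon.b, so a restart is safe.
        ceph orch daemon restart mon.b
    else
        # Not safe; list who is currently in quorum before retrying.
        ceph quorum_status -f json | jq -r '.quorum_names | join(",")'
    fi

"ceph mon ok-to-stop" exits nonzero when stopping the listed mons would break quorum, which is what makes the if-test above meaningful.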
2026-03-10T05:53:25.153 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:53:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:25.153 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:53:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:25.153 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:53:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:25.470 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[17864]: debug 2026-03-10T05:53:25.189+0000 7f7c7983e700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.b -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-10T05:53:25.470 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[17864]: debug 2026-03-10T05:53:25.189+0000 7f7c7983e700 -1 mon.b@2(peon) e3 *** Got Signal Terminated *** 2026-03-10T05:53:25.470 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43426]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-mon-b 2026-03-10T05:53:25.470 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mon.b.service: Deactivated successfully. 2026-03-10T05:53:25.470 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 systemd[1]: Stopped Ceph mon.b for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T05:53:25.496 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:25 vm02 bash[52264]: [10/Mar/2026:05:53:25] ENGINE Bus STOPPING 2026-03-10T05:53:25.496 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:25 vm02 bash[52264]: [10/Mar/2026:05:53:25] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T05:53:25.496 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:25 vm02 bash[52264]: [10/Mar/2026:05:53:25] ENGINE Bus STOPPED 2026-03-10T05:53:25.496 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:25 vm02 bash[52264]: [10/Mar/2026:05:53:25] ENGINE Bus STARTING 2026-03-10T05:53:25.751 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:53:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:25.751 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:53:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:25.751 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:53:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:25.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:53:25.752 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 systemd[1]: Started Ceph mon.b for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.653+0000 7f9f03ad6d80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.653+0000 7f9f03ad6d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.653+0000 7f9f03ad6d80 0 pidfile_write: ignore empty --pid-file 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 0 load: jerasure load: lrc 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: RocksDB version: 7.9.2 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Git sha 0 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: DB SUMMARY 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: DB Session ID: D69BBMWP4NOK3IHRCWC4 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: CURRENT file: CURRENT 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 
vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: MANIFEST file: MANIFEST-000009 size: 883 Bytes 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-b/store.db dir, Total Num: 1, files: 000024.sst 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-b/store.db: 000022.log size: 358645 ; 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.error_if_exists: 0 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.create_if_missing: 0 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.env: 0x560a0845adc0 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.info_log: 0x560a460b37e0 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.statistics: (nil) 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.use_fsync: 0 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.max_log_file_size: 0 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 
2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.allow_fallocate: 1 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.use_direct_reads: 0 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.db_log_dir: 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.wal_dir: 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 
05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.write_buffer_manager: 0x560a460b7900 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.unordered_write: 0 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.row_cache: None 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.wal_filter: None 2026-03-10T05:53:25.753 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: 
Options.two_write_queues: 0 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.wal_compression: 0 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.atomic_flush: 0 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 
bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.max_open_files: -1 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Compression algorithms supported: 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: kZSTD supported: 0 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: kXpressCompression supported: 0 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: kLZ4Compression supported: 1 
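At this point mon.b has come back on the target squid build (19.2.3-678-ge911bdeb) and is replaying its RocksDB store; the options dump also records which compression codecs this build ships (LZ4, Zlib, LZ4HC and Snappy supported; ZSTD, BZip2 and Xpress not). While the rolling restart proceeds, per-daemon versions can be read back from the orchestrator; a minimal sketch, outside this run (field names follow the JSON emitted by "ceph orch ps --format json"):

    # Sketch: list the version each mon reports after its restart.
    ceph orch ps --format json |
      jq -r '.[] | select(.daemon_type == "mon") | "\(.daemon_type).\(.daemon_id)\t\(.version)"'

Compare with the mixed "overall" map earlier in this excerpt, where 13 daemons still reported quincy against 4 on squid.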
2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: kZlibCompression supported: 1
2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: kLZ4HCCompression supported: 1
2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: kSnappyCompression supported: 1
2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Fast CRC32 supported: Supported on x86
2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: DMutex implementation: pthread_mutex_t
2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-b/store.db/MANIFEST-000009
2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.merge_operator:
2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.compaction_filter: None
2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.compaction_filter_factory: None
2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.sst_partitioner_factory: None
2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.memtable_factory: SkipListFactory
2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.table_factory: BlockBasedTable
2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560a460b2320)
2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: cache_index_and_filter_blocks: 1
2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: cache_index_and_filter_blocks_with_high_priority: 0
2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: pin_top_level_index_and_filter: 1
2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: index_type: 0
2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: data_block_index_type: 0
2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: index_shortening: 1
2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: data_block_hash_table_util_ratio: 0.750000
2026-03-10T05:53:25.754 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: checksum: 4
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: no_block_cache: 0
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: block_cache: 0x560a460d9350
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: block_cache_name: BinnedLRUCache
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: block_cache_options:
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: capacity : 536870912
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: num_shard_bits : 4
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: strict_capacity_limit : 0
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: high_pri_pool_ratio: 0.000
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: block_cache_compressed: (nil)
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: persistent_cache: (nil)
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: block_size: 4096
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: block_size_deviation: 10
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: block_restart_interval: 16
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: index_block_restart_interval: 1
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: metadata_block_size: 4096
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: partition_filters: 0
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: use_delta_encoding: 1
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: filter_policy: bloomfilter
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: whole_key_filtering: 1
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: verify_compression: 0
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: read_amp_bytes_per_bit: 0
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: format_version: 5
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: enable_index_compression: 1
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: block_align: 0
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: max_auto_readahead_size: 262144
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: prepopulate_block_cache: 0
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: initial_auto_readahead_size: 8192
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: num_file_reads_for_auto_readahead: 2
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.write_buffer_size: 33554432
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.max_write_buffer_number: 2
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.compression: NoCompression
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.bottommost_compression: Disabled
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.prefix_extractor: nullptr
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.num_levels: 7
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.compression_opts.window_bits: -14
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.compression_opts.level: 32767
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.compression_opts.strategy: 0
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.compression_opts.enabled: false
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.target_file_size_base: 67108864
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.target_file_size_multiplier: 1
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-10T05:53:25.755 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.max_compaction_bytes: 1677721600
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.arena_block_size: 1048576
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.disable_auto_compactions: 0
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.inplace_update_support: 0
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.inplace_update_num_locks: 10000
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.memtable_whole_key_filtering: 0
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.memtable_huge_page_size: 0
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.bloom_locality: 0
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.max_successive_merges: 0
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.optimize_filters_for_hits: 0
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.paranoid_file_checks: 0
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.force_consistency_checks: 1
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.report_bg_io_stats: 0
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.ttl: 2592000
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.periodic_compaction_seconds: 0
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.preserve_internal_time_seconds: 0
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.enable_blob_files: false
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.min_blob_size: 0
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.blob_file_size: 268435456
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.blob_compression_type: NoCompression
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.enable_blob_garbage_collection: false
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.blob_compaction_readahead_size: 0
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.blob_file_starting_level: 0
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 3 rocksdb: [table/block_based/block_based_table_reader.cc:721] At least one SST file opened without unique ID to verify: 24.sst
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-b/store.db/MANIFEST-000009 succeeded,manifest_file_number is 9, next_file_number is 26, last_sequence is 12792, log_number is 22,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 22
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 81ac7175-a424-488b-bd0c-3a8de312a420
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773122005664533, "job": 1, "event": "recovery_started", "wal_files": [22]}
2026-03-10T05:53:25.756 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: debug 2026-03-10T05:53:25.657+0000 7f9f03ad6d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #22 mode 2
2026-03-10T05:53:25.756 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:53:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:53:25.756 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:53:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:53:25.757 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:53:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:53:25.757 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:53:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:53:25.757 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:53:25 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:53:25.787 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:25 vm02 bash[52264]: [10/Mar/2026:05:53:25] ENGINE Serving on http://:::9283
2026-03-10T05:53:25.788 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:25 vm02 bash[52264]: [10/Mar/2026:05:53:25] ENGINE Bus STARTED
2026-03-10T05:53:26.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: cephadm 2026-03-10T05:53:23.955961+0000 mgr.y (mgr.24992) 18 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: cephadm 2026-03-10T05:53:23.955961+0000 mgr.y (mgr.24992) 18 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: cephadm 2026-03-10T05:53:23.956042+0000 mgr.y (mgr.24992) 19 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: cephadm 2026-03-10T05:53:23.956042+0000 mgr.y (mgr.24992) 19 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: cephadm 2026-03-10T05:53:23.990330+0000 mgr.y (mgr.24992) 20 : cephadm [INF] Updating vm02:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.conf
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: cephadm 2026-03-10T05:53:23.990330+0000 mgr.y (mgr.24992) 20 : cephadm [INF] Updating vm02:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.conf
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: cephadm 2026-03-10T05:53:23.992447+0000 mgr.y (mgr.24992) 21 : cephadm [INF] Updating vm05:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.conf
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: cephadm 2026-03-10T05:53:23.992447+0000 mgr.y (mgr.24992) 21 : cephadm [INF] Updating vm05:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.conf
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: cephadm 2026-03-10T05:53:24.021270+0000 mgr.y (mgr.24992) 22 : cephadm [INF] Updating vm02:/etc/ceph/ceph.client.admin.keyring
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: cephadm 2026-03-10T05:53:24.021270+0000 mgr.y (mgr.24992) 22 : cephadm [INF] Updating vm02:/etc/ceph/ceph.client.admin.keyring
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: cephadm 2026-03-10T05:53:24.026621+0000 mgr.y (mgr.24992) 23 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: cephadm 2026-03-10T05:53:24.026621+0000 mgr.y (mgr.24992) 23 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: cephadm 2026-03-10T05:53:24.057473+0000 mgr.y (mgr.24992) 24 : cephadm [INF] Updating vm02:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.client.admin.keyring
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: cephadm 2026-03-10T05:53:24.057473+0000 mgr.y (mgr.24992) 24 : cephadm [INF] Updating vm02:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.client.admin.keyring
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: cephadm 2026-03-10T05:53:24.060576+0000 mgr.y (mgr.24992) 25 : cephadm [INF] Updating vm05:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.client.admin.keyring
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: cephadm 2026-03-10T05:53:24.060576+0000 mgr.y (mgr.24992) 25 : cephadm [INF] Updating vm05:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.client.admin.keyring
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: cephadm 2026-03-10T05:53:24.163572+0000 mgr.y (mgr.24992) 26 : cephadm [INF] Upgrade: It appears safe to stop mon.b
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: cephadm 2026-03-10T05:53:24.163572+0000 mgr.y (mgr.24992) 26 : cephadm [INF] Upgrade: It appears safe to stop mon.b
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: cephadm 2026-03-10T05:53:24.572629+0000 mgr.y (mgr.24992) 27 : cephadm [INF] Upgrade: Updating mon.b
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: cephadm 2026-03-10T05:53:24.572629+0000 mgr.y (mgr.24992) 27 : cephadm [INF] Upgrade: Updating mon.b
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: audit 2026-03-10T05:53:24.579407+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: audit 2026-03-10T05:53:24.579407+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: cephadm 2026-03-10T05:53:24.582068+0000 mgr.y (mgr.24992) 28 : cephadm [INF] Deploying daemon mon.b on vm05
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: cephadm 2026-03-10T05:53:24.582068+0000 mgr.y (mgr.24992) 28 : cephadm [INF] Deploying daemon mon.b on vm05
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: audit 2026-03-10T05:53:24.583421+0000 mon.b (mon.2) 132 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: audit 2026-03-10T05:53:24.583421+0000 mon.b (mon.2) 132 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: audit 2026-03-10T05:53:24.583894+0000 mon.b (mon.2) 133 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: audit 2026-03-10T05:53:24.583894+0000 mon.b (mon.2) 133 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: audit 2026-03-10T05:53:24.584340+0000 mon.b (mon.2) 134 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: audit 2026-03-10T05:53:24.584340+0000 mon.b (mon.2) 134 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: cluster 2026-03-10T05:53:24.828475+0000 mgr.y (mgr.24992) 29 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: cluster 2026-03-10T05:53:24.828475+0000 mgr.y (mgr.24992) 29 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: audit 2026-03-10T05:53:25.225037+0000 mon.a (mon.0) 51 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: audit 2026-03-10T05:53:25.225037+0000 mon.a (mon.0) 51 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: audit 2026-03-10T05:53:25.225258+0000 mon.a (mon.0) 52 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: audit 2026-03-10T05:53:25.225258+0000 mon.a (mon.0) 52 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: audit 2026-03-10T05:53:25.225460+0000 mon.a (mon.0) 53 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:25 vm02 bash[56371]: audit 2026-03-10T05:53:25.225460+0000 mon.a (mon.0) 53 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: cephadm 2026-03-10T05:53:23.955961+0000 mgr.y (mgr.24992) 18 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: cephadm 2026-03-10T05:53:23.955961+0000 mgr.y (mgr.24992) 18 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: cephadm 2026-03-10T05:53:23.956042+0000 mgr.y (mgr.24992) 19 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: cephadm 2026-03-10T05:53:23.956042+0000 mgr.y (mgr.24992) 19 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: cephadm 2026-03-10T05:53:23.990330+0000 mgr.y (mgr.24992) 20 : cephadm [INF] Updating vm02:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.conf
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: cephadm 2026-03-10T05:53:23.990330+0000 mgr.y (mgr.24992) 20 : cephadm [INF] Updating vm02:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.conf
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: cephadm 2026-03-10T05:53:23.992447+0000 mgr.y (mgr.24992) 21 : cephadm [INF] Updating vm05:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.conf
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: cephadm 2026-03-10T05:53:23.992447+0000 mgr.y (mgr.24992) 21 : cephadm [INF] Updating vm05:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.conf
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: cephadm 2026-03-10T05:53:24.021270+0000 mgr.y (mgr.24992) 22 : cephadm [INF] Updating vm02:/etc/ceph/ceph.client.admin.keyring
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: cephadm 2026-03-10T05:53:24.021270+0000 mgr.y (mgr.24992) 22 : cephadm [INF] Updating vm02:/etc/ceph/ceph.client.admin.keyring
2026-03-10T05:53:26.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: cephadm 2026-03-10T05:53:24.026621+0000 mgr.y (mgr.24992) 23 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: cephadm 2026-03-10T05:53:24.026621+0000 mgr.y (mgr.24992) 23 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: cephadm 2026-03-10T05:53:24.057473+0000 mgr.y (mgr.24992) 24 : cephadm [INF] Updating vm02:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.client.admin.keyring
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: cephadm 2026-03-10T05:53:24.057473+0000 mgr.y (mgr.24992) 24 : cephadm [INF] Updating vm02:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.client.admin.keyring
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: cephadm 2026-03-10T05:53:24.060576+0000 mgr.y (mgr.24992) 25 : cephadm [INF] Updating vm05:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.client.admin.keyring
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: cephadm 2026-03-10T05:53:24.060576+0000 mgr.y (mgr.24992) 25 : cephadm [INF] Updating vm05:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.client.admin.keyring
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: cephadm 2026-03-10T05:53:24.163572+0000 mgr.y (mgr.24992) 26 : cephadm [INF] Upgrade: It appears safe to stop mon.b
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: cephadm 2026-03-10T05:53:24.163572+0000 mgr.y (mgr.24992) 26 : cephadm [INF] Upgrade: It appears safe to stop mon.b
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: cephadm 2026-03-10T05:53:24.572629+0000 mgr.y (mgr.24992) 27 : cephadm [INF] Upgrade: Updating mon.b
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: cephadm 2026-03-10T05:53:24.572629+0000 mgr.y (mgr.24992) 27 : cephadm [INF] Upgrade: Updating mon.b
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: audit 2026-03-10T05:53:24.579407+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: audit 2026-03-10T05:53:24.579407+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: cephadm 2026-03-10T05:53:24.582068+0000 mgr.y (mgr.24992) 28 : cephadm [INF] Deploying daemon mon.b on vm05
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: cephadm 2026-03-10T05:53:24.582068+0000 mgr.y (mgr.24992) 28 : cephadm [INF] Deploying daemon mon.b on vm05
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: audit 2026-03-10T05:53:24.583421+0000 mon.b (mon.2) 132 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: audit 2026-03-10T05:53:24.583421+0000 mon.b (mon.2) 132 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: audit 2026-03-10T05:53:24.583894+0000 mon.b (mon.2) 133 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: audit 2026-03-10T05:53:24.583894+0000 mon.b (mon.2) 133 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: audit 2026-03-10T05:53:24.584340+0000 mon.b (mon.2) 134 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: audit 2026-03-10T05:53:24.584340+0000 mon.b (mon.2) 134 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: cluster 2026-03-10T05:53:24.828475+0000 mgr.y (mgr.24992) 29 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: cluster 2026-03-10T05:53:24.828475+0000 mgr.y (mgr.24992) 29 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: audit 2026-03-10T05:53:25.225037+0000 mon.a (mon.0) 51 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: audit 2026-03-10T05:53:25.225037+0000 mon.a (mon.0) 51 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: audit 2026-03-10T05:53:25.225258+0000 mon.a (mon.0) 52 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: audit 2026-03-10T05:53:25.225258+0000 mon.a (mon.0) 52 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: audit 2026-03-10T05:53:25.225460+0000 mon.a (mon.0) 53 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T05:53:26.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:25 vm02 bash[55303]: audit 2026-03-10T05:53:25.225460+0000 mon.a (mon.0) 53 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: cephadm 2026-03-10T05:53:23.955961+0000 mgr.y (mgr.24992) 18 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: cephadm 2026-03-10T05:53:23.955961+0000 mgr.y (mgr.24992) 18 : cephadm [INF] Updating vm02:/etc/ceph/ceph.conf
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: cephadm 2026-03-10T05:53:23.956042+0000 mgr.y (mgr.24992) 19 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: cephadm 2026-03-10T05:53:23.956042+0000 mgr.y (mgr.24992) 19 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: cephadm 2026-03-10T05:53:23.990330+0000 mgr.y (mgr.24992) 20 : cephadm [INF] Updating vm02:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.conf
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: cephadm 2026-03-10T05:53:23.990330+0000 mgr.y (mgr.24992) 20 : cephadm [INF] Updating vm02:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.conf
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: cephadm 2026-03-10T05:53:23.992447+0000 mgr.y (mgr.24992) 21 : cephadm [INF] Updating vm05:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.conf
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: cephadm 2026-03-10T05:53:23.992447+0000 mgr.y (mgr.24992) 21 : cephadm [INF] Updating vm05:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.conf
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: cephadm 2026-03-10T05:53:24.021270+0000 mgr.y (mgr.24992) 22 : cephadm [INF] Updating vm02:/etc/ceph/ceph.client.admin.keyring
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: cephadm 2026-03-10T05:53:24.021270+0000 mgr.y (mgr.24992) 22 : cephadm [INF] Updating vm02:/etc/ceph/ceph.client.admin.keyring
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: cephadm 2026-03-10T05:53:24.026621+0000 mgr.y (mgr.24992) 23 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: cephadm 2026-03-10T05:53:24.026621+0000 mgr.y (mgr.24992) 23 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: cephadm 2026-03-10T05:53:24.057473+0000 mgr.y (mgr.24992) 24 : cephadm [INF] Updating vm02:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.client.admin.keyring
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: cephadm 2026-03-10T05:53:24.057473+0000 mgr.y (mgr.24992) 24 : cephadm [INF] Updating vm02:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.client.admin.keyring
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: cephadm 2026-03-10T05:53:24.060576+0000 mgr.y (mgr.24992) 25 : cephadm [INF] Updating vm05:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.client.admin.keyring
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: cephadm 2026-03-10T05:53:24.060576+0000 mgr.y (mgr.24992) 25 : cephadm [INF] Updating vm05:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/config/ceph.client.admin.keyring
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: cephadm 2026-03-10T05:53:24.163572+0000 mgr.y (mgr.24992) 26 : cephadm [INF] Upgrade: It appears safe to stop mon.b
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: cephadm 2026-03-10T05:53:24.163572+0000 mgr.y (mgr.24992) 26 : cephadm [INF] Upgrade: It appears safe to stop mon.b
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: cephadm 2026-03-10T05:53:24.572629+0000 mgr.y (mgr.24992) 27 : cephadm [INF] Upgrade: Updating mon.b
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: cephadm 2026-03-10T05:53:24.572629+0000 mgr.y (mgr.24992) 27 : cephadm [INF] Upgrade: Updating mon.b
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: audit 2026-03-10T05:53:24.579407+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: audit 2026-03-10T05:53:24.579407+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.24992 ' entity='mgr.y'
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: cephadm 2026-03-10T05:53:24.582068+0000 mgr.y (mgr.24992) 28 : cephadm [INF] Deploying daemon mon.b on vm05
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: cephadm 2026-03-10T05:53:24.582068+0000 mgr.y (mgr.24992) 28 : cephadm [INF] Deploying daemon mon.b on vm05
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: audit 2026-03-10T05:53:24.583421+0000 mon.b (mon.2) 132 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: audit 2026-03-10T05:53:24.583421+0000 mon.b (mon.2) 132 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: audit 2026-03-10T05:53:24.583894+0000 mon.b (mon.2) 133 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: audit 2026-03-10T05:53:24.583894+0000 mon.b (mon.2) 133 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: audit 2026-03-10T05:53:24.584340+0000 mon.b (mon.2) 134 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: audit 2026-03-10T05:53:24.584340+0000 mon.b (mon.2) 134 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: cluster 2026-03-10T05:53:24.828475+0000 mgr.y (mgr.24992) 29 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: cluster 2026-03-10T05:53:24.828475+0000 mgr.y (mgr.24992) 29 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: audit 2026-03-10T05:53:25.225037+0000 mon.a (mon.0) 51 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: audit 2026-03-10T05:53:25.225037+0000 mon.a (mon.0) 51 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: audit 2026-03-10T05:53:25.225258+0000 mon.a (mon.0) 52 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: audit 2026-03-10T05:53:25.225258+0000 mon.a (mon.0) 52 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: audit 2026-03-10T05:53:25.225460+0000 mon.a (mon.0) 53 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T05:53:26.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:25 vm05 bash[43541]: audit 2026-03-10T05:53:25.225460+0000 mon.a (mon.0) 53 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: audit 2026-03-10T05:53:25.817774+0000 mon.a (mon.0) 69 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: audit 2026-03-10T05:53:25.817774+0000 mon.a (mon.0) 69 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.818739+0000 mon.c (mon.1) 3 : cluster [INF] mon.c calling monitor election
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.818739+0000 mon.c (mon.1) 3 : cluster [INF] mon.c calling monitor election
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.818783+0000 mon.a (mon.0) 70 : cluster [INF] mon.a calling monitor election
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.818783+0000 mon.a (mon.0) 70 : cluster [INF] mon.a calling monitor election
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.821338+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.821338+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.821778+0000 mon.a (mon.0) 71 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2)
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.821778+0000 mon.a (mon.0) 71 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2)
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.826397+0000 mon.a (mon.0) 72 : cluster [DBG] monmap epoch 4
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.826397+0000 mon.a (mon.0) 72 : cluster [DBG] monmap epoch 4
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.826450+0000 mon.a (mon.0) 73 : cluster [DBG] fsid 107483ae-1c44-11f1-b530-c1172cd6122a
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.826450+0000 mon.a (mon.0) 73 : cluster [DBG] fsid 107483ae-1c44-11f1-b530-c1172cd6122a
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.826501+0000 mon.a (mon.0) 74 : cluster [DBG] last_changed 2026-03-10T05:53:25.801686+0000
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.826501+0000 mon.a (mon.0) 74 : cluster [DBG] last_changed 2026-03-10T05:53:25.801686+0000
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.826548+0000 mon.a (mon.0) 75 : cluster [DBG] created 2026-03-10T05:43:50.866640+0000
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.826548+0000 mon.a (mon.0) 75 : cluster [DBG] created 2026-03-10T05:43:50.866640+0000
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.826599+0000 mon.a (mon.0) 76 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.826599+0000 mon.a (mon.0) 76 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.826646+0000 mon.a (mon.0) 77 : cluster [DBG] election_strategy: 1
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.826646+0000 mon.a (mon.0) 77 : cluster [DBG] election_strategy: 1
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.826700+0000 mon.a (mon.0) 78 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.826700+0000 mon.a (mon.0) 78 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.826751+0000 mon.a (mon.0) 79 : cluster [DBG] 1: [v2:192.168.123.102:3301/0,v1:192.168.123.102:6790/0] mon.c
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.826751+0000 mon.a (mon.0) 79 : cluster [DBG] 1: [v2:192.168.123.102:3301/0,v1:192.168.123.102:6790/0] mon.c
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.826801+0000 mon.a (mon.0) 80 : cluster [DBG] 2: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.b
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.826801+0000 mon.a (mon.0) 80 : cluster [DBG] 2: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.b
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.827214+0000 mon.a (mon.0) 81 : cluster [DBG] fsmap
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.827214+0000 mon.a (mon.0) 81 : cluster [DBG] fsmap
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.827306+0000 mon.a (mon.0) 82 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.827306+0000 mon.a (mon.0) 82 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.827783+0000 mon.a (mon.0) 83 : cluster [DBG] mgrmap e39: y(active, since 15s), standbys: x
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.827783+0000 mon.a (mon.0) 83 : cluster [DBG] mgrmap e39: y(active, since 15s), standbys: x
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.828105+0000 mon.a (mon.0) 84 : cluster [INF] overall HEALTH_OK
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: cluster 2026-03-10T05:53:25.828105+0000 mon.a (mon.0) 84 : cluster [INF] overall HEALTH_OK
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: audit 2026-03-10T05:53:25.834725+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: audit 2026-03-10T05:53:25.834725+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: audit 2026-03-10T05:53:25.837187+0000 mon.a (mon.0) 86 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: audit 2026-03-10T05:53:25.837187+0000 mon.a (mon.0) 86 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: audit 2026-03-10T05:53:25.837682+0000 mon.a (mon.0) 87 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: audit 2026-03-10T05:53:25.837682+0000 mon.a (mon.0) 87 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T05:53:27.085
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: audit 2026-03-10T05:53:25.841143+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: audit 2026-03-10T05:53:25.841143+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: audit 2026-03-10T05:53:25.879581+0000 mon.a (mon.0) 89 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:53:27.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:26 vm02 bash[56371]: audit 2026-03-10T05:53:25.879581+0000 mon.a (mon.0) 89 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: audit 2026-03-10T05:53:25.817774+0000 mon.a (mon.0) 69 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: audit 2026-03-10T05:53:25.817774+0000 mon.a (mon.0) 69 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.818739+0000 mon.c (mon.1) 3 : cluster [INF] mon.c calling monitor election 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.818739+0000 mon.c (mon.1) 3 : cluster [INF] mon.c calling monitor election 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.818783+0000 mon.a (mon.0) 70 : cluster [INF] mon.a calling monitor election 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.818783+0000 mon.a (mon.0) 70 : cluster [INF] mon.a calling monitor election 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.821338+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.821338+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.821778+0000 mon.a (mon.0) 71 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.821778+0000 mon.a (mon.0) 71 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.826397+0000 mon.a (mon.0) 72 : cluster [DBG] monmap epoch 4 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.826397+0000 
mon.a (mon.0) 72 : cluster [DBG] monmap epoch 4 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.826450+0000 mon.a (mon.0) 73 : cluster [DBG] fsid 107483ae-1c44-11f1-b530-c1172cd6122a 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.826450+0000 mon.a (mon.0) 73 : cluster [DBG] fsid 107483ae-1c44-11f1-b530-c1172cd6122a 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.826501+0000 mon.a (mon.0) 74 : cluster [DBG] last_changed 2026-03-10T05:53:25.801686+0000 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.826501+0000 mon.a (mon.0) 74 : cluster [DBG] last_changed 2026-03-10T05:53:25.801686+0000 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.826548+0000 mon.a (mon.0) 75 : cluster [DBG] created 2026-03-10T05:43:50.866640+0000 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.826548+0000 mon.a (mon.0) 75 : cluster [DBG] created 2026-03-10T05:43:50.866640+0000 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.826599+0000 mon.a (mon.0) 76 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.826599+0000 mon.a (mon.0) 76 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.826646+0000 mon.a (mon.0) 77 : cluster [DBG] election_strategy: 1 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.826646+0000 mon.a (mon.0) 77 : cluster [DBG] election_strategy: 1 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.826700+0000 mon.a (mon.0) 78 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.826700+0000 mon.a (mon.0) 78 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.826751+0000 mon.a (mon.0) 79 : cluster [DBG] 1: [v2:192.168.123.102:3301/0,v1:192.168.123.102:6790/0] mon.c 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.826751+0000 mon.a (mon.0) 79 : cluster [DBG] 1: [v2:192.168.123.102:3301/0,v1:192.168.123.102:6790/0] mon.c 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.826801+0000 mon.a (mon.0) 80 : cluster [DBG] 2: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.b 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.826801+0000 mon.a (mon.0) 80 : cluster [DBG] 2: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.b 2026-03-10T05:53:27.086 
INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.827214+0000 mon.a (mon.0) 81 : cluster [DBG] fsmap 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.827214+0000 mon.a (mon.0) 81 : cluster [DBG] fsmap 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.827306+0000 mon.a (mon.0) 82 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.827306+0000 mon.a (mon.0) 82 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.827783+0000 mon.a (mon.0) 83 : cluster [DBG] mgrmap e39: y(active, since 15s), standbys: x 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.827783+0000 mon.a (mon.0) 83 : cluster [DBG] mgrmap e39: y(active, since 15s), standbys: x 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.828105+0000 mon.a (mon.0) 84 : cluster [INF] overall HEALTH_OK 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: cluster 2026-03-10T05:53:25.828105+0000 mon.a (mon.0) 84 : cluster [INF] overall HEALTH_OK 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: audit 2026-03-10T05:53:25.834725+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: audit 2026-03-10T05:53:25.834725+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: audit 2026-03-10T05:53:25.837187+0000 mon.a (mon.0) 86 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: audit 2026-03-10T05:53:25.837187+0000 mon.a (mon.0) 86 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: audit 2026-03-10T05:53:25.837682+0000 mon.a (mon.0) 87 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: audit 2026-03-10T05:53:25.837682+0000 mon.a (mon.0) 87 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: audit 2026-03-10T05:53:25.841143+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: audit 2026-03-10T05:53:25.841143+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.24992 
192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: audit 2026-03-10T05:53:25.879581+0000 mon.a (mon.0) 89 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:53:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:26 vm02 bash[55303]: audit 2026-03-10T05:53:25.879581+0000 mon.a (mon.0) 89 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: audit 2026-03-10T05:53:25.817774+0000 mon.a (mon.0) 69 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: audit 2026-03-10T05:53:25.817774+0000 mon.a (mon.0) 69 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.818739+0000 mon.c (mon.1) 3 : cluster [INF] mon.c calling monitor election 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.818739+0000 mon.c (mon.1) 3 : cluster [INF] mon.c calling monitor election 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.818783+0000 mon.a (mon.0) 70 : cluster [INF] mon.a calling monitor election 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.818783+0000 mon.a (mon.0) 70 : cluster [INF] mon.a calling monitor election 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.821338+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.821338+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.821778+0000 mon.a (mon.0) 71 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.821778+0000 mon.a (mon.0) 71 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.826397+0000 mon.a (mon.0) 72 : cluster [DBG] monmap epoch 4 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.826397+0000 mon.a (mon.0) 72 : cluster [DBG] monmap epoch 4 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.826450+0000 mon.a (mon.0) 73 : cluster [DBG] fsid 107483ae-1c44-11f1-b530-c1172cd6122a 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 
2026-03-10T05:53:25.826450+0000 mon.a (mon.0) 73 : cluster [DBG] fsid 107483ae-1c44-11f1-b530-c1172cd6122a 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.826501+0000 mon.a (mon.0) 74 : cluster [DBG] last_changed 2026-03-10T05:53:25.801686+0000 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.826501+0000 mon.a (mon.0) 74 : cluster [DBG] last_changed 2026-03-10T05:53:25.801686+0000 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.826548+0000 mon.a (mon.0) 75 : cluster [DBG] created 2026-03-10T05:43:50.866640+0000 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.826548+0000 mon.a (mon.0) 75 : cluster [DBG] created 2026-03-10T05:43:50.866640+0000 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.826599+0000 mon.a (mon.0) 76 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.826599+0000 mon.a (mon.0) 76 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.826646+0000 mon.a (mon.0) 77 : cluster [DBG] election_strategy: 1 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.826646+0000 mon.a (mon.0) 77 : cluster [DBG] election_strategy: 1 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.826700+0000 mon.a (mon.0) 78 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.826700+0000 mon.a (mon.0) 78 : cluster [DBG] 0: [v2:192.168.123.102:3300/0,v1:192.168.123.102:6789/0] mon.a 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.826751+0000 mon.a (mon.0) 79 : cluster [DBG] 1: [v2:192.168.123.102:3301/0,v1:192.168.123.102:6790/0] mon.c 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.826751+0000 mon.a (mon.0) 79 : cluster [DBG] 1: [v2:192.168.123.102:3301/0,v1:192.168.123.102:6790/0] mon.c 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.826801+0000 mon.a (mon.0) 80 : cluster [DBG] 2: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.b 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.826801+0000 mon.a (mon.0) 80 : cluster [DBG] 2: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.b 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.827214+0000 mon.a (mon.0) 81 : cluster [DBG] fsmap 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.827214+0000 mon.a (mon.0) 81 : cluster [DBG] fsmap 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 
05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.827306+0000 mon.a (mon.0) 82 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.827306+0000 mon.a (mon.0) 82 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.827783+0000 mon.a (mon.0) 83 : cluster [DBG] mgrmap e39: y(active, since 15s), standbys: x 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.827783+0000 mon.a (mon.0) 83 : cluster [DBG] mgrmap e39: y(active, since 15s), standbys: x 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.828105+0000 mon.a (mon.0) 84 : cluster [INF] overall HEALTH_OK 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: cluster 2026-03-10T05:53:25.828105+0000 mon.a (mon.0) 84 : cluster [INF] overall HEALTH_OK 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: audit 2026-03-10T05:53:25.834725+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: audit 2026-03-10T05:53:25.834725+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: audit 2026-03-10T05:53:25.837187+0000 mon.a (mon.0) 86 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: audit 2026-03-10T05:53:25.837187+0000 mon.a (mon.0) 86 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: audit 2026-03-10T05:53:25.837682+0000 mon.a (mon.0) 87 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: audit 2026-03-10T05:53:25.837682+0000 mon.a (mon.0) 87 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: audit 2026-03-10T05:53:25.841143+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: audit 2026-03-10T05:53:25.841143+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 bash[43541]: audit 2026-03-10T05:53:25.879581+0000 mon.a (mon.0) 89 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:53:27.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:26 vm05 
bash[43541]: audit 2026-03-10T05:53:25.879581+0000 mon.a (mon.0) 89 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:53:27.252 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:53:26 vm05 bash[41269]: ts=2026-03-10T05:53:26.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T05:53:28.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:27 vm02 bash[56371]: cluster 2026-03-10T05:53:26.828699+0000 mgr.y (mgr.24992) 30 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T05:53:28.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:27 vm02 bash[56371]: cluster 2026-03-10T05:53:26.828699+0000 mgr.y (mgr.24992) 30 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T05:53:28.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:27 vm02 bash[56371]: audit 2026-03-10T05:53:26.878195+0000 mgr.y (mgr.24992) 31 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:53:28.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:27 vm02 bash[56371]: audit 2026-03-10T05:53:26.878195+0000 mgr.y (mgr.24992) 31 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:53:28.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:27 vm02 bash[55303]: cluster 2026-03-10T05:53:26.828699+0000 mgr.y (mgr.24992) 30 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T05:53:28.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:27 vm02 bash[55303]: cluster 2026-03-10T05:53:26.828699+0000 mgr.y (mgr.24992) 30 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T05:53:28.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 
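Note on the CephNodeDiskspaceWarning evaluation failure above: it is a PromQL many-to-many error, not a disk problem. node_uname_info exists twice for instance="vm05" (one series carries the extra cluster label, one does not), so the one-to-one join "on (instance) group_left (nodename)" is ambiguous. A minimal sketch of a disambiguated expression, assuming it is acceptable to aggregate the info metric down to the join labels (this rewrite is illustrative only, not the expression shipped in ceph_alerts.yml):

    predict_linear(node_filesystem_free_bytes{device=~"/.*"}[2d], 3600 * 24 * 5)
      * on (instance) group_left (nodename)
        max by (instance, nodename) (node_uname_info)  # collapse duplicate label sets
    < 0

The max by (...) keeps exactly one right-hand series per (instance, nodename), which restores the uniqueness the group_left join requires.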
2026-03-10T05:53:28.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:27 vm02 bash[55303]: audit 2026-03-10T05:53:26.878195+0000 mgr.y (mgr.24992) 31 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:53:28.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:27 vm05 bash[43541]: cluster 2026-03-10T05:53:26.828699+0000 mgr.y (mgr.24992) 30 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T05:53:28.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:27 vm05 bash[43541]: audit 2026-03-10T05:53:26.878195+0000 mgr.y (mgr.24992) 31 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:53:30.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:30 vm02 bash[56371]: cluster 2026-03-10T05:53:28.829143+0000 mgr.y (mgr.24992) 32 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T05:53:30.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:30 vm02 bash[55303]: cluster 2026-03-10T05:53:28.829143+0000 mgr.y (mgr.24992) 32 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T05:53:30.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:30 vm05 bash[43541]: cluster 2026-03-10T05:53:28.829143+0000 mgr.y (mgr.24992) 32 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T05:53:31.084 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:30 vm02 bash[52264]: debug 2026-03-10T05:53:30.671+0000 7fd64b74b640 -1 mgr.server handle_report got status from non-daemon mon.b
2026-03-10T05:53:32.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:32 vm05 bash[43541]: cluster 2026-03-10T05:53:30.829461+0000 mgr.y (mgr.24992) 33 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:53:32.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:32 vm05 bash[43541]: audit 2026-03-10T05:53:31.116090+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:32.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:32 vm05 bash[43541]: audit 2026-03-10T05:53:31.121712+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:32.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:32 vm05 bash[43541]: audit 2026-03-10T05:53:31.672623+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:32.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:32 vm05 bash[43541]: audit 2026-03-10T05:53:31.680370+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:32.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:32 vm02 bash[56371]: cluster 2026-03-10T05:53:30.829461+0000 mgr.y (mgr.24992) 33 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:53:32.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:32 vm02 bash[56371]: audit 2026-03-10T05:53:31.116090+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:32.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:32 vm02 bash[56371]: audit 2026-03-10T05:53:31.121712+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:32.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:32 vm02 bash[56371]: audit 2026-03-10T05:53:31.672623+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:32.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:32 vm02 bash[56371]: audit 2026-03-10T05:53:31.680370+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:32.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:32 vm02 bash[55303]: cluster 2026-03-10T05:53:30.829461+0000 mgr.y (mgr.24992) 33 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:53:32.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:32 vm02 bash[55303]: audit 2026-03-10T05:53:31.116090+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:32.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:32 vm02 bash[55303]: audit 2026-03-10T05:53:31.121712+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:32.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:32 vm02 bash[55303]: audit 2026-03-10T05:53:31.672623+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:32.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:32 vm02 bash[55303]: audit 2026-03-10T05:53:31.680370+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:33.334 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:32 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:53:32] "GET /metrics HTTP/1.1" 200 37751 "" "Prometheus/2.51.0"
2026-03-10T05:53:34.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:34 vm05 bash[43541]: cluster 2026-03-10T05:53:32.829903+0000 mgr.y (mgr.24992) 34 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:53:34.501 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:53:34 vm05 bash[41269]: ts=2026-03-10T05:53:34.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:53:34.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:34 vm02 bash[56371]: cluster 2026-03-10T05:53:32.829903+0000 mgr.y (mgr.24992) 34 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:53:34.584 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:34 vm02 bash[55303]: cluster 2026-03-10T05:53:32.829903+0000 mgr.y (mgr.24992) 34 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:53:36.459 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:36 vm05 bash[43541]: cluster 2026-03-10T05:53:34.830200+0000 mgr.y (mgr.24992) 35 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:53:36.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:36 vm02 bash[56371]: cluster 2026-03-10T05:53:34.830200+0000 mgr.y (mgr.24992) 35 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
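Note on the CephOSDFlapping evaluation failure above: it has the same shape as the node_uname_info case. During the upgrade, ceph_osd_metadata for osd.0 is scraped under two label sets (instance="ceph_cluster" with a cluster label, and instance="192.168.123.105:9283" without), so the "on (ceph_daemon) group_left (hostname)" join finds duplicate series on its right-hand side. A minimal sketch of a deduplicated expression, again illustrative rather than the stock rule:

    (rate(ceph_osd_up[5m])
      * on (ceph_daemon) group_left (hostname)
        max by (ceph_daemon, hostname) (ceph_osd_metadata))  # one series per daemon
    * 60 > 1

Once the old exporter target ages out of Prometheus, both rules should evaluate cleanly again without any rewrite.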
2026-03-10T05:53:36.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:36 vm02 bash[55303]: cluster 2026-03-10T05:53:34.830200+0000 mgr.y (mgr.24992) 35 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:53:37.251 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:53:36 vm05 bash[41269]: ts=2026-03-10T05:53:36.949Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:53:38.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:38 vm05 bash[43541]: cluster 2026-03-10T05:53:36.830503+0000 mgr.y (mgr.24992) 36 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:53:38.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:38 vm05 bash[43541]: audit 2026-03-10T05:53:36.884730+0000 mgr.y (mgr.24992) 37 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:53:38.529 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:38 vm02 bash[56371]: cluster 2026-03-10T05:53:36.830503+0000 mgr.y (mgr.24992) 36 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:53:38.529 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:38 vm02 bash[56371]: audit 2026-03-10T05:53:36.884730+0000 mgr.y (mgr.24992) 37 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:53:38.529 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:38 vm02 bash[55303]: cluster 2026-03-10T05:53:36.830503+0000 mgr.y (mgr.24992) 36 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:53:38.530 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:38 vm02 bash[55303]: audit 2026-03-10T05:53:36.884730+0000 mgr.y (mgr.24992) 37 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:53:39.331 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:39 vm02 bash[56371]: audit 2026-03-10T05:53:38.187164+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:39.331 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:39 vm02 bash[56371]: audit 2026-03-10T05:53:38.192183+0000 mon.a (mon.0) 95 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:39.331 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:39 vm02 bash[56371]: audit 2026-03-10T05:53:38.193180+0000 mon.a (mon.0) 96 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:39.331 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:39 vm02 bash[56371]: audit 2026-03-10T05:53:38.193728+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:39 vm02 bash[56371]: audit 2026-03-10T05:53:38.198228+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:39 vm02 bash[56371]: cephadm 2026-03-10T05:53:38.209349+0000 mgr.y (mgr.24992) 38 : cephadm [INF] Reconfiguring osd.3 (monmap changed)...
2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:39 vm02 bash[56371]: audit 2026-03-10T05:53:38.209616+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:39 vm02 bash[56371]: audit 2026-03-10T05:53:38.210193+0000 mon.a (mon.0) 100 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:39 vm02 bash[56371]: cephadm 2026-03-10T05:53:38.211891+0000 mgr.y (mgr.24992) 39 : cephadm [INF] Reconfiguring daemon osd.3 on vm02
2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:39 vm02 bash[56371]: audit 2026-03-10T05:53:38.626861+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:39 vm02 bash[56371]: audit 2026-03-10T05:53:38.631328+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:39 vm02 bash[56371]: cephadm 2026-03-10T05:53:38.632062+0000 mgr.y (mgr.24992) 40 : cephadm [INF] Reconfiguring osd.2 (monmap changed)...
2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:39 vm02 bash[56371]: audit 2026-03-10T05:53:38.632677+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:39 vm02 bash[56371]: audit 2026-03-10T05:53:38.633193+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:39 vm02 bash[56371]: cephadm 2026-03-10T05:53:38.634277+0000 mgr.y (mgr.24992) 41 : cephadm [INF] Reconfiguring daemon osd.2 on vm02
2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:39 vm02 bash[56371]: audit 2026-03-10T05:53:39.041453+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:39 vm02 bash[56371]: audit 2026-03-10T05:53:39.047660+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:39 vm02 bash[56371]: audit 2026-03-10T05:53:39.048724+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm02.pbogjd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:39 vm02 bash[56371]: audit 2026-03-10T05:53:39.049785+0000 mon.a (mon.0) 108 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: audit 2026-03-10T05:53:38.187164+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: audit 2026-03-10T05:53:38.192183+0000 mon.a (mon.0) 95 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: audit 2026-03-10T05:53:38.193180+0000 mon.a (mon.0) 96 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: audit 2026-03-10T05:53:38.193728+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: audit 2026-03-10T05:53:38.198228+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 
bash[55303]: cephadm 2026-03-10T05:53:38.209349+0000 mgr.y (mgr.24992) 38 : cephadm [INF] Reconfiguring osd.3 (monmap changed)... 2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: cephadm 2026-03-10T05:53:38.209349+0000 mgr.y (mgr.24992) 38 : cephadm [INF] Reconfiguring osd.3 (monmap changed)... 2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: audit 2026-03-10T05:53:38.209616+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: audit 2026-03-10T05:53:38.209616+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: audit 2026-03-10T05:53:38.210193+0000 mon.a (mon.0) 100 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: audit 2026-03-10T05:53:38.210193+0000 mon.a (mon.0) 100 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: cephadm 2026-03-10T05:53:38.211891+0000 mgr.y (mgr.24992) 39 : cephadm [INF] Reconfiguring daemon osd.3 on vm02 2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: cephadm 2026-03-10T05:53:38.211891+0000 mgr.y (mgr.24992) 39 : cephadm [INF] Reconfiguring daemon osd.3 on vm02 2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: audit 2026-03-10T05:53:38.626861+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: audit 2026-03-10T05:53:38.626861+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: audit 2026-03-10T05:53:38.631328+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: audit 2026-03-10T05:53:38.631328+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: cephadm 2026-03-10T05:53:38.632062+0000 mgr.y (mgr.24992) 40 : cephadm [INF] Reconfiguring osd.2 (monmap changed)... 2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: cephadm 2026-03-10T05:53:38.632062+0000 mgr.y (mgr.24992) 40 : cephadm [INF] Reconfiguring osd.2 (monmap changed)... 
2026-03-10T05:53:39.332 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: audit 2026-03-10T05:53:38.632677+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T05:53:39.333 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: audit 2026-03-10T05:53:38.632677+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T05:53:39.333 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: audit 2026-03-10T05:53:38.633193+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:39.333 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: audit 2026-03-10T05:53:38.633193+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:39.333 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: cephadm 2026-03-10T05:53:38.634277+0000 mgr.y (mgr.24992) 41 : cephadm [INF] Reconfiguring daemon osd.2 on vm02 2026-03-10T05:53:39.333 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: cephadm 2026-03-10T05:53:38.634277+0000 mgr.y (mgr.24992) 41 : cephadm [INF] Reconfiguring daemon osd.2 on vm02 2026-03-10T05:53:39.333 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: audit 2026-03-10T05:53:39.041453+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:39.333 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: audit 2026-03-10T05:53:39.041453+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:39.333 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: audit 2026-03-10T05:53:39.047660+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:39.333 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: audit 2026-03-10T05:53:39.047660+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:39.333 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: audit 2026-03-10T05:53:39.048724+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm02.pbogjd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T05:53:39.333 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: audit 2026-03-10T05:53:39.048724+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm02.pbogjd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T05:53:39.333 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: audit 2026-03-10T05:53:39.049785+0000 mon.a (mon.0) 108 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-10T05:53:39.333 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:39 vm02 bash[55303]: audit 2026-03-10T05:53:39.049785+0000 mon.a (mon.0) 108 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:38.187164+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:38.187164+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:38.192183+0000 mon.a (mon.0) 95 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:38.192183+0000 mon.a (mon.0) 95 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:38.193180+0000 mon.a (mon.0) 96 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:38.193180+0000 mon.a (mon.0) 96 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:38.193728+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:38.193728+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:38.198228+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:38.198228+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: cephadm 2026-03-10T05:53:38.209349+0000 mgr.y (mgr.24992) 38 : cephadm [INF] Reconfiguring osd.3 (monmap changed)... 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: cephadm 2026-03-10T05:53:38.209349+0000 mgr.y (mgr.24992) 38 : cephadm [INF] Reconfiguring osd.3 (monmap changed)... 
2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:38.209616+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:38.209616+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:38.210193+0000 mon.a (mon.0) 100 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:38.210193+0000 mon.a (mon.0) 100 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: cephadm 2026-03-10T05:53:38.211891+0000 mgr.y (mgr.24992) 39 : cephadm [INF] Reconfiguring daemon osd.3 on vm02 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: cephadm 2026-03-10T05:53:38.211891+0000 mgr.y (mgr.24992) 39 : cephadm [INF] Reconfiguring daemon osd.3 on vm02 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:38.626861+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:38.626861+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:38.631328+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:38.631328+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: cephadm 2026-03-10T05:53:38.632062+0000 mgr.y (mgr.24992) 40 : cephadm [INF] Reconfiguring osd.2 (monmap changed)... 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: cephadm 2026-03-10T05:53:38.632062+0000 mgr.y (mgr.24992) 40 : cephadm [INF] Reconfiguring osd.2 (monmap changed)... 
2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:38.632677+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:38.632677+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:38.633193+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:38.633193+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: cephadm 2026-03-10T05:53:38.634277+0000 mgr.y (mgr.24992) 41 : cephadm [INF] Reconfiguring daemon osd.2 on vm02 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: cephadm 2026-03-10T05:53:38.634277+0000 mgr.y (mgr.24992) 41 : cephadm [INF] Reconfiguring daemon osd.2 on vm02 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:39.041453+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:39.041453+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:39.047660+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:39.047660+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:39.048724+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm02.pbogjd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:39.048724+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm02.pbogjd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:39.049785+0000 mon.a (mon.0) 108 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-10T05:53:39.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:39 vm05 bash[43541]: audit 2026-03-10T05:53:39.049785+0000 mon.a (mon.0) 108 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: cluster 2026-03-10T05:53:38.830919+0000 mgr.y (mgr.24992) 42 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: cluster 2026-03-10T05:53:38.830919+0000 mgr.y (mgr.24992) 42 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: cephadm 2026-03-10T05:53:39.048505+0000 mgr.y (mgr.24992) 43 : cephadm [INF] Reconfiguring rgw.foo.vm02.pbogjd (monmap changed)... 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: cephadm 2026-03-10T05:53:39.048505+0000 mgr.y (mgr.24992) 43 : cephadm [INF] Reconfiguring rgw.foo.vm02.pbogjd (monmap changed)... 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: cephadm 2026-03-10T05:53:39.050416+0000 mgr.y (mgr.24992) 44 : cephadm [INF] Reconfiguring daemon rgw.foo.vm02.pbogjd on vm02 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: cephadm 2026-03-10T05:53:39.050416+0000 mgr.y (mgr.24992) 44 : cephadm [INF] Reconfiguring daemon rgw.foo.vm02.pbogjd on vm02 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:39.403371+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:39.403371+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:39.408898+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:39.408898+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: cephadm 2026-03-10T05:53:39.410423+0000 mgr.y (mgr.24992) 45 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: cephadm 2026-03-10T05:53:39.410423+0000 mgr.y (mgr.24992) 45 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 
2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:39.410567+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:39.410567+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:39.410948+0000 mon.a (mon.0) 112 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:39.410948+0000 mon.a (mon.0) 112 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:39.411304+0000 mon.a (mon.0) 113 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:39.411304+0000 mon.a (mon.0) 113 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: cephadm 2026-03-10T05:53:39.411731+0000 mgr.y (mgr.24992) 46 : cephadm [INF] Reconfiguring daemon mon.c on vm02 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: cephadm 2026-03-10T05:53:39.411731+0000 mgr.y (mgr.24992) 46 : cephadm [INF] Reconfiguring daemon mon.c on vm02 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:39.819032+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:39.819032+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:39.824379+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:39.824379+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:39.825559+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:39.825559+0000 mon.a (mon.0) 116 : audit [INF] 
from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:39.826097+0000 mon.a (mon.0) 117 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:39.826097+0000 mon.a (mon.0) 117 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:40.201504+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:40.201504+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:40.207211+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:40.207211+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:40.208430+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:40.208430+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:40.209068+0000 mon.a (mon.0) 121 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:40.209068+0000 mon.a (mon.0) 121 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:40.209416+0000 mon.a (mon.0) 122 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:40.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:40 vm05 bash[43541]: audit 2026-03-10T05:53:40.209416+0000 mon.a (mon.0) 122 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: cluster 2026-03-10T05:53:38.830919+0000 mgr.y (mgr.24992) 42 : cluster [DBG] pgmap v17: 161 
pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: cluster 2026-03-10T05:53:38.830919+0000 mgr.y (mgr.24992) 42 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: cephadm 2026-03-10T05:53:39.048505+0000 mgr.y (mgr.24992) 43 : cephadm [INF] Reconfiguring rgw.foo.vm02.pbogjd (monmap changed)... 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: cephadm 2026-03-10T05:53:39.048505+0000 mgr.y (mgr.24992) 43 : cephadm [INF] Reconfiguring rgw.foo.vm02.pbogjd (monmap changed)... 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: cephadm 2026-03-10T05:53:39.050416+0000 mgr.y (mgr.24992) 44 : cephadm [INF] Reconfiguring daemon rgw.foo.vm02.pbogjd on vm02 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: cephadm 2026-03-10T05:53:39.050416+0000 mgr.y (mgr.24992) 44 : cephadm [INF] Reconfiguring daemon rgw.foo.vm02.pbogjd on vm02 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:39.403371+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:39.403371+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:39.408898+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:39.408898+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: cephadm 2026-03-10T05:53:39.410423+0000 mgr.y (mgr.24992) 45 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: cephadm 2026-03-10T05:53:39.410423+0000 mgr.y (mgr.24992) 45 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 
2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:39.410567+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:39.410567+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:39.410948+0000 mon.a (mon.0) 112 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:39.410948+0000 mon.a (mon.0) 112 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:39.411304+0000 mon.a (mon.0) 113 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:39.411304+0000 mon.a (mon.0) 113 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: cephadm 2026-03-10T05:53:39.411731+0000 mgr.y (mgr.24992) 46 : cephadm [INF] Reconfiguring daemon mon.c on vm02 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: cephadm 2026-03-10T05:53:39.411731+0000 mgr.y (mgr.24992) 46 : cephadm [INF] Reconfiguring daemon mon.c on vm02 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:39.819032+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:39.819032+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:39.824379+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:39.824379+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:39.825559+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:39.825559+0000 mon.a (mon.0) 116 : audit [INF] 
from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:39.826097+0000 mon.a (mon.0) 117 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:39.826097+0000 mon.a (mon.0) 117 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:40.201504+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.822 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:40.201504+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:40.207211+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:40.207211+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:40.208430+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:40.208430+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:40.209068+0000 mon.a (mon.0) 121 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:40.209068+0000 mon.a (mon.0) 121 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:40.209416+0000 mon.a (mon.0) 122 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:40 vm02 bash[56371]: audit 2026-03-10T05:53:40.209416+0000 mon.a (mon.0) 122 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: cluster 2026-03-10T05:53:38.830919+0000 mgr.y (mgr.24992) 42 : cluster [DBG] pgmap v17: 161 
pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: cluster 2026-03-10T05:53:38.830919+0000 mgr.y (mgr.24992) 42 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: cephadm 2026-03-10T05:53:39.048505+0000 mgr.y (mgr.24992) 43 : cephadm [INF] Reconfiguring rgw.foo.vm02.pbogjd (monmap changed)... 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: cephadm 2026-03-10T05:53:39.048505+0000 mgr.y (mgr.24992) 43 : cephadm [INF] Reconfiguring rgw.foo.vm02.pbogjd (monmap changed)... 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: cephadm 2026-03-10T05:53:39.050416+0000 mgr.y (mgr.24992) 44 : cephadm [INF] Reconfiguring daemon rgw.foo.vm02.pbogjd on vm02 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: cephadm 2026-03-10T05:53:39.050416+0000 mgr.y (mgr.24992) 44 : cephadm [INF] Reconfiguring daemon rgw.foo.vm02.pbogjd on vm02 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:39.403371+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:39.403371+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:39.408898+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:39.408898+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: cephadm 2026-03-10T05:53:39.410423+0000 mgr.y (mgr.24992) 45 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: cephadm 2026-03-10T05:53:39.410423+0000 mgr.y (mgr.24992) 45 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 
2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:39.410567+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:39.410567+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:39.410948+0000 mon.a (mon.0) 112 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:39.410948+0000 mon.a (mon.0) 112 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:39.411304+0000 mon.a (mon.0) 113 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:39.411304+0000 mon.a (mon.0) 113 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: cephadm 2026-03-10T05:53:39.411731+0000 mgr.y (mgr.24992) 46 : cephadm [INF] Reconfiguring daemon mon.c on vm02 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: cephadm 2026-03-10T05:53:39.411731+0000 mgr.y (mgr.24992) 46 : cephadm [INF] Reconfiguring daemon mon.c on vm02 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:39.819032+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:39.819032+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:39.824379+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:39.824379+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:39.825559+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:39.825559+0000 mon.a (mon.0) 116 : audit [INF] 
from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:39.826097+0000 mon.a (mon.0) 117 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:39.826097+0000 mon.a (mon.0) 117 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:40.201504+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:40.201504+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:40.207211+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:40.207211+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:40.208430+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:40.208430+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:40.209068+0000 mon.a (mon.0) 121 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:40.209068+0000 mon.a (mon.0) 121 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:40.209416+0000 mon.a (mon.0) 122 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:40.823 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:40 vm02 bash[55303]: audit 2026-03-10T05:53:40.209416+0000 mon.a (mon.0) 122 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:41.577 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: cephadm 2026-03-10T05:53:39.825407+0000 mgr.y (mgr.24992) 47 : cephadm [INF] Reconfiguring 
osd.0 (monmap changed)... 2026-03-10T05:53:41.577 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: cephadm 2026-03-10T05:53:39.825407+0000 mgr.y (mgr.24992) 47 : cephadm [INF] Reconfiguring osd.0 (monmap changed)... 2026-03-10T05:53:41.577 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: cephadm 2026-03-10T05:53:39.827517+0000 mgr.y (mgr.24992) 48 : cephadm [INF] Reconfiguring daemon osd.0 on vm02 2026-03-10T05:53:41.577 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: cephadm 2026-03-10T05:53:39.827517+0000 mgr.y (mgr.24992) 48 : cephadm [INF] Reconfiguring daemon osd.0 on vm02 2026-03-10T05:53:41.577 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: cephadm 2026-03-10T05:53:40.207938+0000 mgr.y (mgr.24992) 49 : cephadm [INF] Reconfiguring mon.a (monmap changed)... 2026-03-10T05:53:41.577 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: cephadm 2026-03-10T05:53:40.207938+0000 mgr.y (mgr.24992) 49 : cephadm [INF] Reconfiguring mon.a (monmap changed)... 2026-03-10T05:53:41.577 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: cephadm 2026-03-10T05:53:40.209866+0000 mgr.y (mgr.24992) 50 : cephadm [INF] Reconfiguring daemon mon.a on vm02 2026-03-10T05:53:41.577 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: cephadm 2026-03-10T05:53:40.209866+0000 mgr.y (mgr.24992) 50 : cephadm [INF] Reconfiguring daemon mon.a on vm02 2026-03-10T05:53:41.577 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: audit 2026-03-10T05:53:40.565409+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:41.577 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: audit 2026-03-10T05:53:40.565409+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:41.577 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: audit 2026-03-10T05:53:40.571255+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:41.577 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: audit 2026-03-10T05:53:40.571255+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:41.577 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: cephadm 2026-03-10T05:53:40.572082+0000 mgr.y (mgr.24992) 51 : cephadm [INF] Reconfiguring rgw.smpl.vm02.pglcfm (monmap changed)... 2026-03-10T05:53:41.577 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: cephadm 2026-03-10T05:53:40.572082+0000 mgr.y (mgr.24992) 51 : cephadm [INF] Reconfiguring rgw.smpl.vm02.pglcfm (monmap changed)... 
2026-03-10T05:53:41.577 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: audit 2026-03-10T05:53:40.573344+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm02.pglcfm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:53:41.577 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: audit 2026-03-10T05:53:40.574385+0000 mon.a (mon.0) 126 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:41.577 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: cephadm 2026-03-10T05:53:40.574956+0000 mgr.y (mgr.24992) 52 : cephadm [INF] Reconfiguring daemon rgw.smpl.vm02.pglcfm on vm02
2026-03-10T05:53:41.578 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: audit 2026-03-10T05:53:40.879810+0000 mon.a (mon.0) 127 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:53:41.578 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: audit 2026-03-10T05:53:40.945759+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:41.578 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: audit 2026-03-10T05:53:40.951569+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:41.578 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: audit 2026-03-10T05:53:40.952493+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T05:53:41.578 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: audit 2026-03-10T05:53:40.953881+0000 mon.a (mon.0) 131 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:41.578 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: audit 2026-03-10T05:53:41.336020+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:41.578 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: audit 2026-03-10T05:53:41.341649+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:41.578 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: audit 2026-03-10T05:53:41.342599+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T05:53:41.578 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: audit 2026-03-10T05:53:41.343980+0000 mon.a (mon.0) 135 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T05:53:41.578 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:41 vm02 bash[56371]: audit 2026-03-10T05:53:41.344442+0000 mon.a (mon.0) 136 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:41.578 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:41 vm02 bash[55303]: cephadm 2026-03-10T05:53:39.825407+0000 mgr.y (mgr.24992) 47 : cephadm [INF] Reconfiguring osd.0 (monmap changed)...
2026-03-10T05:53:41.578 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:41 vm02 bash[55303]: cephadm 2026-03-10T05:53:39.827517+0000 mgr.y (mgr.24992) 48 : cephadm [INF] Reconfiguring daemon osd.0 on vm02
2026-03-10T05:53:41.578 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:41 vm02 bash[55303]: cephadm 2026-03-10T05:53:40.207938+0000 mgr.y (mgr.24992) 49 : cephadm [INF] Reconfiguring mon.a (monmap changed)...
2026-03-10T05:53:41.578 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:41 vm02 bash[55303]: cephadm 2026-03-10T05:53:40.209866+0000 mgr.y (mgr.24992) 50 : cephadm [INF] Reconfiguring daemon mon.a on vm02
2026-03-10T05:53:41.578 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:41 vm02 bash[55303]: audit 2026-03-10T05:53:40.565409+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:41.578 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:41 vm02 bash[55303]: audit 2026-03-10T05:53:40.571255+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:41.578 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:41 vm02 bash[55303]: cephadm 2026-03-10T05:53:40.572082+0000 mgr.y (mgr.24992) 51 : cephadm [INF] Reconfiguring rgw.smpl.vm02.pglcfm (monmap changed)...
2026-03-10T05:53:41.578 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:41 vm02 bash[55303]: audit 2026-03-10T05:53:40.573344+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm02.pglcfm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:53:41.578 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:41 vm02 bash[55303]: audit 2026-03-10T05:53:40.574385+0000 mon.a (mon.0) 126 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:41.579 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:41 vm02 bash[55303]: cephadm 2026-03-10T05:53:40.574956+0000 mgr.y (mgr.24992) 52 : cephadm [INF] Reconfiguring daemon rgw.smpl.vm02.pglcfm on vm02
2026-03-10T05:53:41.579 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:41 vm02 bash[55303]: audit 2026-03-10T05:53:40.879810+0000 mon.a (mon.0) 127 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:53:41.579 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:41 vm02 bash[55303]: audit 2026-03-10T05:53:40.945759+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:41.579 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:41 vm02 bash[55303]: audit 2026-03-10T05:53:40.951569+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:41.579 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:41 vm02 bash[55303]: audit 2026-03-10T05:53:40.952493+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T05:53:41.579 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:41 vm02 bash[55303]: audit 2026-03-10T05:53:40.953881+0000 mon.a (mon.0) 131 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:41.579 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:41 vm02 bash[55303]: audit 2026-03-10T05:53:41.336020+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:41.579 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:41 vm02 bash[55303]: audit 2026-03-10T05:53:41.341649+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:41.579 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:41 vm02 bash[55303]: audit 2026-03-10T05:53:41.342599+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T05:53:41.579 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:41 vm02 bash[55303]: audit 2026-03-10T05:53:41.343980+0000 mon.a (mon.0) 135 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T05:53:41.579 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:41 vm02 bash[55303]: audit 2026-03-10T05:53:41.344442+0000 mon.a (mon.0) 136 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:41.985 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:41 vm05 bash[43541]: cephadm 2026-03-10T05:53:39.825407+0000 mgr.y (mgr.24992) 47 : cephadm [INF] Reconfiguring osd.0 (monmap changed)...
2026-03-10T05:53:41.986 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:41 vm05 bash[43541]: cephadm 2026-03-10T05:53:39.827517+0000 mgr.y (mgr.24992) 48 : cephadm [INF] Reconfiguring daemon osd.0 on vm02
2026-03-10T05:53:41.986 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:41 vm05 bash[43541]: cephadm 2026-03-10T05:53:40.207938+0000 mgr.y (mgr.24992) 49 : cephadm [INF] Reconfiguring mon.a (monmap changed)...
2026-03-10T05:53:41.986 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:41 vm05 bash[43541]: cephadm 2026-03-10T05:53:40.209866+0000 mgr.y (mgr.24992) 50 : cephadm [INF] Reconfiguring daemon mon.a on vm02
2026-03-10T05:53:41.986 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:41 vm05 bash[43541]: audit 2026-03-10T05:53:40.565409+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:41.986 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:41 vm05 bash[43541]: audit 2026-03-10T05:53:40.571255+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:41.986 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:41 vm05 bash[43541]: cephadm 2026-03-10T05:53:40.572082+0000 mgr.y (mgr.24992) 51 : cephadm [INF] Reconfiguring rgw.smpl.vm02.pglcfm (monmap changed)...
2026-03-10T05:53:41.986 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:41 vm05 bash[43541]: audit 2026-03-10T05:53:40.573344+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm02.pglcfm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:53:41.986 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:41 vm05 bash[43541]: audit 2026-03-10T05:53:40.574385+0000 mon.a (mon.0) 126 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:41.986 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:41 vm05 bash[43541]: cephadm 2026-03-10T05:53:40.574956+0000 mgr.y (mgr.24992) 52 : cephadm [INF] Reconfiguring daemon rgw.smpl.vm02.pglcfm on vm02
2026-03-10T05:53:41.986 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:41 vm05 bash[43541]: audit 2026-03-10T05:53:40.879810+0000 mon.a (mon.0) 127 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:53:41.986 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:41 vm05 bash[43541]: audit 2026-03-10T05:53:40.945759+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:41.986 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:41 vm05 bash[43541]: audit 2026-03-10T05:53:40.951569+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:41.986 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:41 vm05 bash[43541]: audit 2026-03-10T05:53:40.952493+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T05:53:41.986 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:41 vm05 bash[43541]: audit 2026-03-10T05:53:40.953881+0000 mon.a (mon.0) 131 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:41.986 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:41 vm05 bash[43541]: audit 2026-03-10T05:53:41.336020+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:41.986 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:41 vm05 bash[43541]: audit 2026-03-10T05:53:41.341649+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:41.986 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:41 vm05 bash[43541]: audit 2026-03-10T05:53:41.342599+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T05:53:41.986 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:41 vm05 bash[43541]: audit 2026-03-10T05:53:41.343980+0000 mon.a (mon.0) 135 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T05:53:41.986 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:41 vm05 bash[43541]: audit 2026-03-10T05:53:41.344442+0000 mon.a (mon.0) 136 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:42.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:42 vm05 bash[43541]: cluster 2026-03-10T05:53:40.831244+0000 mgr.y (mgr.24992) 53 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:53:42.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:42 vm05 bash[43541]: cephadm 2026-03-10T05:53:40.952257+0000 mgr.y (mgr.24992) 54 : cephadm [INF] Reconfiguring osd.1 (monmap changed)...
2026-03-10T05:53:42.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:42 vm05 bash[43541]: cephadm 2026-03-10T05:53:40.955151+0000 mgr.y (mgr.24992) 55 : cephadm [INF] Reconfiguring daemon osd.1 on vm02
2026-03-10T05:53:42.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:42 vm05 bash[43541]: cephadm 2026-03-10T05:53:41.342418+0000 mgr.y (mgr.24992) 56 : cephadm [INF] Reconfiguring mgr.y (monmap changed)...
2026-03-10T05:53:42.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:42 vm05 bash[43541]: cephadm 2026-03-10T05:53:41.344909+0000 mgr.y (mgr.24992) 57 : cephadm [INF] Reconfiguring daemon mgr.y on vm02
2026-03-10T05:53:42.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:42 vm05 bash[43541]: audit 2026-03-10T05:53:41.727647+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:42.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:42 vm05 bash[43541]: audit 2026-03-10T05:53:41.734889+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:42.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:42 vm05 bash[43541]: audit 2026-03-10T05:53:41.737351+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-10T05:53:42.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:42 vm05 bash[43541]: audit 2026-03-10T05:53:41.737762+0000 mon.a (mon.0) 140 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:42.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:42 vm05 bash[43541]: audit 2026-03-10T05:53:42.123142+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:42.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:42 vm05 bash[43541]: audit 2026-03-10T05:53:42.127465+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:42.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:42 vm05 bash[43541]: audit 2026-03-10T05:53:42.129321+0000 mon.a (mon.0) 143 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-10T05:53:42.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:42 vm05 bash[43541]: audit 2026-03-10T05:53:42.129806+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:42.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:42 vm05 bash[43541]: audit 2026-03-10T05:53:42.514776+0000 mon.a (mon.0) 145 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:42.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:42 vm05 bash[43541]: audit 2026-03-10T05:53:42.521006+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:42.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:42 vm05 bash[43541]: audit 2026-03-10T05:53:42.522805+0000 mon.a (mon.0) 147 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T05:53:42.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:42 vm05 bash[43541]: audit 2026-03-10T05:53:42.523426+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T05:53:42.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:42 vm05 bash[43541]: audit 2026-03-10T05:53:42.523888+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:42.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:42 vm02 bash[56371]: cluster 2026-03-10T05:53:40.831244+0000 mgr.y (mgr.24992) 53 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:53:42.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:42 vm02 bash[56371]: cephadm 2026-03-10T05:53:40.952257+0000 mgr.y (mgr.24992) 54 : cephadm [INF] Reconfiguring osd.1 (monmap changed)...
2026-03-10T05:53:42.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:42 vm02 bash[56371]: cephadm 2026-03-10T05:53:40.955151+0000 mgr.y (mgr.24992) 55 : cephadm [INF] Reconfiguring daemon osd.1 on vm02
2026-03-10T05:53:42.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:42 vm02 bash[56371]: cephadm 2026-03-10T05:53:41.342418+0000 mgr.y (mgr.24992) 56 : cephadm [INF] Reconfiguring mgr.y (monmap changed)...
2026-03-10T05:53:42.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:42 vm02 bash[56371]: cephadm 2026-03-10T05:53:41.344909+0000 mgr.y (mgr.24992) 57 : cephadm [INF] Reconfiguring daemon mgr.y on vm02
2026-03-10T05:53:42.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:42 vm02 bash[56371]: audit 2026-03-10T05:53:41.727647+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:42.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:42 vm02 bash[56371]: audit 2026-03-10T05:53:41.734889+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:42.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:42 vm02 bash[56371]: audit 2026-03-10T05:53:41.737351+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-10T05:53:42.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:42 vm02 bash[56371]: audit 2026-03-10T05:53:41.737762+0000 mon.a (mon.0) 140 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:42.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:42 vm02 bash[56371]: audit 2026-03-10T05:53:42.123142+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:42.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:42 vm02 bash[56371]: audit 2026-03-10T05:53:42.127465+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:42.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:42 vm02 bash[56371]: audit 2026-03-10T05:53:42.129321+0000 mon.a (mon.0) 143 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-10T05:53:42.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:42 vm02 bash[55303]: cluster 2026-03-10T05:53:40.831244+0000 mgr.y (mgr.24992) 53 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:53:42.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:42 vm02 bash[55303]: cephadm 2026-03-10T05:53:40.952257+0000 mgr.y (mgr.24992) 54 : cephadm [INF] Reconfiguring osd.1 (monmap changed)...
2026-03-10T05:53:42.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:42 vm02 bash[55303]: cephadm 2026-03-10T05:53:40.955151+0000 mgr.y (mgr.24992) 55 : cephadm [INF] Reconfiguring daemon osd.1 on vm02
2026-03-10T05:53:42.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:42 vm02 bash[55303]: cephadm 2026-03-10T05:53:41.342418+0000 mgr.y (mgr.24992) 56 : cephadm [INF] Reconfiguring mgr.y (monmap changed)...
2026-03-10T05:53:42.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:42 vm02 bash[55303]: cephadm 2026-03-10T05:53:41.344909+0000 mgr.y (mgr.24992) 57 : cephadm [INF] Reconfiguring daemon mgr.y on vm02
2026-03-10T05:53:42.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:42 vm02 bash[55303]: audit 2026-03-10T05:53:41.727647+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:42.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:42 vm02 bash[55303]: audit 2026-03-10T05:53:41.734889+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:42.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:42 vm02 bash[55303]: audit 2026-03-10T05:53:41.737351+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-10T05:53:42.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:42 vm02 bash[55303]: audit 2026-03-10T05:53:41.737762+0000 mon.a (mon.0) 140 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:42.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:42 vm02 bash[55303]: audit 2026-03-10T05:53:42.123142+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:42.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:42 vm02 bash[55303]: audit 2026-03-10T05:53:42.127465+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:42.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:42 vm02 bash[55303]: audit 2026-03-10T05:53:42.129321+0000 mon.a (mon.0) 143 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-10T05:53:42.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:42 vm02 bash[55303]: audit 2026-03-10T05:53:42.129806+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:42.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:42 vm02 bash[55303]: audit 2026-03-10T05:53:42.514776+0000 mon.a (mon.0) 145 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:42.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:42 vm02 bash[55303]: audit 2026-03-10T05:53:42.521006+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:42.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:42 vm02 bash[55303]: audit 2026-03-10T05:53:42.522805+0000 mon.a (mon.0) 147 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T05:53:42.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:42 vm02 bash[55303]: audit 2026-03-10T05:53:42.523426+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T05:53:42.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:42 vm02 bash[55303]: audit 2026-03-10T05:53:42.523888+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:42.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:42 vm02 bash[56371]: audit 2026-03-10T05:53:42.129806+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:42.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:42 vm02 bash[56371]: audit 2026-03-10T05:53:42.514776+0000 mon.a (mon.0) 145 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:42.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:42 vm02 bash[56371]: audit 2026-03-10T05:53:42.521006+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:42.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:42 vm02 bash[56371]: audit 2026-03-10T05:53:42.522805+0000 mon.a (mon.0) 147 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T05:53:42.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:42 vm02 bash[56371]: audit 2026-03-10T05:53:42.523426+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T05:53:42.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:42 vm02 bash[56371]: audit 2026-03-10T05:53:42.523888+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:43.335 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:42 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:53:42] "GET /metrics HTTP/1.1" 200 37814 "" "Prometheus/2.51.0"
2026-03-10T05:53:43.950 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:43 vm05 bash[43541]: cephadm 2026-03-10T05:53:41.737089+0000 mgr.y (mgr.24992) 58 : cephadm [INF] Reconfiguring osd.4 (monmap changed)...
2026-03-10T05:53:43.950 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:43 vm05 bash[43541]: cephadm 2026-03-10T05:53:41.738864+0000 mgr.y (mgr.24992) 59 : cephadm [INF] Reconfiguring daemon osd.4 on vm05
2026-03-10T05:53:43.950 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:43 vm05 bash[43541]: cephadm 2026-03-10T05:53:42.128168+0000 mgr.y (mgr.24992) 60 : cephadm [INF] Reconfiguring osd.5 (monmap changed)...
2026-03-10T05:53:43.950 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:43 vm05 bash[43541]: cephadm 2026-03-10T05:53:42.130839+0000 mgr.y (mgr.24992) 61 : cephadm [INF] Reconfiguring daemon osd.5 on vm05
2026-03-10T05:53:43.950 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:43 vm05 bash[43541]: cephadm 2026-03-10T05:53:42.521800+0000 mgr.y (mgr.24992) 62 : cephadm [INF] Reconfiguring mgr.x (monmap changed)...
2026-03-10T05:53:43.950 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:43 vm05 bash[43541]: cephadm 2026-03-10T05:53:42.524447+0000 mgr.y (mgr.24992) 63 : cephadm [INF] Reconfiguring daemon mgr.x on vm05
2026-03-10T05:53:43.950 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:43 vm05 bash[43541]: audit 2026-03-10T05:53:42.866272+0000 mon.a (mon.0) 150 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:43.950 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:43 vm05 bash[43541]: audit 2026-03-10T05:53:42.873521+0000 mon.a (mon.0) 151 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:43.950 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:43 vm05 bash[43541]: audit 2026-03-10T05:53:42.875104+0000 mon.a (mon.0) 152 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T05:53:43.950 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:43 vm05 bash[43541]: audit 2026-03-10T05:53:42.875695+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:43.950 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:43 vm05 bash[43541]: audit 2026-03-10T05:53:43.269021+0000 mon.a (mon.0) 154 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:43.951 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:43 vm05 bash[43541]: audit 2026-03-10T05:53:43.274675+0000 mon.a (mon.0) 155 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:43.951 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:43 vm05 bash[43541]: audit 2026-03-10T05:53:43.276168+0000 mon.a (mon.0) 156 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.hvmsxl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:53:43.951 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:43 vm05 bash[43541]: audit 2026-03-10T05:53:43.277105+0000 mon.a (mon.0) 157 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:44.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:43 vm02 bash[56371]: cephadm 2026-03-10T05:53:41.737089+0000 mgr.y (mgr.24992) 58 : cephadm [INF] Reconfiguring osd.4 (monmap changed)...
2026-03-10T05:53:44.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:43 vm02 bash[56371]: cephadm 2026-03-10T05:53:41.738864+0000 mgr.y (mgr.24992) 59 : cephadm [INF] Reconfiguring daemon osd.4 on vm05
2026-03-10T05:53:44.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:43 vm02 bash[56371]: cephadm 2026-03-10T05:53:42.128168+0000 mgr.y (mgr.24992) 60 : cephadm [INF] Reconfiguring osd.5 (monmap changed)...
2026-03-10T05:53:44.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:43 vm02 bash[56371]: cephadm 2026-03-10T05:53:42.130839+0000 mgr.y (mgr.24992) 61 : cephadm [INF] Reconfiguring daemon osd.5 on vm05
2026-03-10T05:53:44.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:43 vm02 bash[56371]: cephadm 2026-03-10T05:53:42.521800+0000 mgr.y (mgr.24992) 62 : cephadm [INF] Reconfiguring mgr.x (monmap changed)...
2026-03-10T05:53:44.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:43 vm02 bash[56371]: cephadm 2026-03-10T05:53:42.524447+0000 mgr.y (mgr.24992) 63 : cephadm [INF] Reconfiguring daemon mgr.x on vm05
2026-03-10T05:53:44.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:43 vm02 bash[56371]: audit 2026-03-10T05:53:42.866272+0000 mon.a (mon.0) 150 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:44.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:43 vm02 bash[56371]: audit 2026-03-10T05:53:42.873521+0000 mon.a (mon.0) 151 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:44.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:43 vm02 bash[56371]: audit 2026-03-10T05:53:42.875104+0000 mon.a (mon.0) 152 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T05:53:44.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:43 vm02 bash[56371]: audit 2026-03-10T05:53:42.875695+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:44.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:43 vm02 bash[56371]: audit 2026-03-10T05:53:43.269021+0000 mon.a (mon.0) 154 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:44.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:43 vm02 bash[56371]: audit 2026-03-10T05:53:43.274675+0000 mon.a (mon.0) 155 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:44.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:43 vm02 bash[56371]: audit 2026-03-10T05:53:43.276168+0000 mon.a (mon.0) 156 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.hvmsxl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:53:44.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:43 vm02 bash[56371]: audit 2026-03-10T05:53:43.277105+0000 mon.a (mon.0) 157 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:44.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:43 vm02 bash[55303]: cephadm 2026-03-10T05:53:41.737089+0000 mgr.y (mgr.24992) 58 : cephadm [INF] Reconfiguring osd.4 (monmap changed)...
2026-03-10T05:53:44.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:43 vm02 bash[55303]: cephadm 2026-03-10T05:53:41.738864+0000 mgr.y (mgr.24992) 59 : cephadm [INF] Reconfiguring daemon osd.4 on vm05
2026-03-10T05:53:44.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:43 vm02 bash[55303]: cephadm 2026-03-10T05:53:42.128168+0000 mgr.y (mgr.24992) 60 : cephadm [INF] Reconfiguring osd.5 (monmap changed)...
2026-03-10T05:53:44.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:43 vm02 bash[55303]: cephadm 2026-03-10T05:53:42.130839+0000 mgr.y (mgr.24992) 61 : cephadm [INF] Reconfiguring daemon osd.5 on vm05
2026-03-10T05:53:44.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:43 vm02 bash[55303]: cephadm 2026-03-10T05:53:42.521800+0000 mgr.y (mgr.24992) 62 : cephadm [INF] Reconfiguring mgr.x (monmap changed)...
2026-03-10T05:53:44.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:43 vm02 bash[55303]: cephadm 2026-03-10T05:53:42.524447+0000 mgr.y (mgr.24992) 63 : cephadm [INF] Reconfiguring daemon mgr.x on vm05
2026-03-10T05:53:44.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:43 vm02 bash[55303]: audit 2026-03-10T05:53:42.866272+0000 mon.a (mon.0) 150 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:44.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:43 vm02 bash[55303]: audit 2026-03-10T05:53:42.873521+0000 mon.a (mon.0) 151 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:44.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:43 vm02 bash[55303]: audit 2026-03-10T05:53:42.875104+0000 mon.a (mon.0) 152 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T05:53:44.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:43 vm02 bash[55303]: audit 2026-03-10T05:53:42.875695+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:44.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:43 vm02 bash[55303]: audit 2026-03-10T05:53:43.269021+0000 mon.a (mon.0) 154 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:44.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:43 vm02 bash[55303]: audit 2026-03-10T05:53:43.274675+0000 mon.a (mon.0) 155 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:44.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:43 vm02 bash[55303]: audit 2026-03-10T05:53:43.276168+0000 mon.a (mon.0) 156 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.hvmsxl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:53:44.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:43 vm02 bash[55303]: audit 2026-03-10T05:53:43.277105+0000 mon.a (mon.0) 157 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:44.250 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:53:44 vm05 bash[41269]: ts=2026-03-10T05:53:44.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:53:45.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:44 vm05 bash[43541]: cluster 2026-03-10T05:53:42.831730+0000 mgr.y (mgr.24992) 64 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:53:45.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:44 vm05 bash[43541]: cephadm 2026-03-10T05:53:42.874898+0000 mgr.y (mgr.24992) 65 : cephadm [INF] Reconfiguring osd.6 (monmap changed)...
2026-03-10T05:53:45.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:44 vm05 bash[43541]: cephadm 2026-03-10T05:53:42.877011+0000 mgr.y (mgr.24992) 66 : cephadm [INF] Reconfiguring daemon osd.6 on vm05
2026-03-10T05:53:45.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:44 vm05 bash[43541]: cephadm 2026-03-10T05:53:43.275957+0000 mgr.y (mgr.24992) 67 : cephadm [INF] Reconfiguring rgw.foo.vm05.hvmsxl (monmap changed)...
2026-03-10T05:53:45.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:44 vm05 bash[43541]: cephadm 2026-03-10T05:53:43.277584+0000 mgr.y (mgr.24992) 68 : cephadm [INF] Reconfiguring daemon rgw.foo.vm05.hvmsxl on vm05
2026-03-10T05:53:45.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:44 vm05 bash[43541]: audit 2026-03-10T05:53:43.639047+0000 mon.a (mon.0) 158 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:45.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:44 vm05 bash[43541]: audit 2026-03-10T05:53:43.645121+0000 mon.a (mon.0) 159 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:45.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:44 vm05 bash[43541]: cephadm 2026-03-10T05:53:43.646119+0000 mgr.y (mgr.24992) 69 : cephadm [INF] Reconfiguring mon.b (monmap changed)...
2026-03-10T05:53:45.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:44 vm05 bash[43541]: audit 2026-03-10T05:53:43.646724+0000 mon.a (mon.0) 160 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T05:53:45.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:44 vm05 bash[43541]: audit 2026-03-10T05:53:43.647582+0000 mon.a (mon.0) 161 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T05:53:45.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:44 vm05 bash[43541]: audit 2026-03-10T05:53:43.648361+0000 mon.a (mon.0) 162 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:45.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:44 vm05 bash[43541]: cephadm 2026-03-10T05:53:43.649166+0000 mgr.y (mgr.24992) 70 : cephadm [INF] Reconfiguring daemon mon.b on vm05
2026-03-10T05:53:45.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:44 vm05 bash[43541]: audit 2026-03-10T05:53:44.034880+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:45.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:44 vm05 bash[43541]: audit 2026-03-10T05:53:44.041609+0000 mon.a (mon.0) 164 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:45.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:44 vm05 bash[43541]: audit 2026-03-10T05:53:44.043385+0000 mon.a (mon.0) 165 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm05.hqqmap", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:53:45.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:44 vm05 bash[43541]: audit 2026-03-10T05:53:44.044942+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:45.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:44 vm05 bash[43541]: audit 2026-03-10T05:53:44.423999+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:45.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:44 vm05 bash[43541]: audit 2026-03-10T05:53:44.431339+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:45.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:44 vm05 bash[43541]: audit 2026-03-10T05:53:44.432584+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-10T05:53:45.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:44 vm05 bash[43541]: audit 2026-03-10T05:53:44.433117+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:44 vm02 bash[56371]: cluster 2026-03-10T05:53:42.831730+0000 mgr.y (mgr.24992) 64 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:53:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:44 vm02 bash[56371]: cephadm 2026-03-10T05:53:42.874898+0000 mgr.y (mgr.24992) 65 : cephadm [INF] Reconfiguring osd.6 (monmap changed)...
2026-03-10T05:53:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:44 vm02 bash[56371]: cephadm 2026-03-10T05:53:42.877011+0000 mgr.y (mgr.24992) 66 : cephadm [INF] Reconfiguring daemon osd.6 on vm05
2026-03-10T05:53:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:44 vm02 bash[56371]: cephadm 2026-03-10T05:53:43.275957+0000 mgr.y (mgr.24992) 67 : cephadm [INF] Reconfiguring rgw.foo.vm05.hvmsxl (monmap changed)...
2026-03-10T05:53:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:44 vm02 bash[56371]: cephadm 2026-03-10T05:53:43.277584+0000 mgr.y (mgr.24992) 68 : cephadm [INF] Reconfiguring daemon rgw.foo.vm05.hvmsxl on vm05
2026-03-10T05:53:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:44 vm02 bash[56371]: audit 2026-03-10T05:53:43.639047+0000 mon.a (mon.0) 158 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:44 vm02 bash[56371]: audit 2026-03-10T05:53:43.645121+0000 mon.a (mon.0) 159 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:44 vm02 bash[56371]: cephadm 2026-03-10T05:53:43.646119+0000 mgr.y (mgr.24992) 69 : cephadm [INF] Reconfiguring mon.b (monmap changed)...
2026-03-10T05:53:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:44 vm02 bash[56371]: audit 2026-03-10T05:53:43.646724+0000 mon.a (mon.0) 160 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T05:53:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:44 vm02 bash[56371]: audit 2026-03-10T05:53:43.647582+0000 mon.a (mon.0) 161 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T05:53:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:44 vm02 bash[56371]: audit 2026-03-10T05:53:43.648361+0000 mon.a (mon.0) 162 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:44 vm02 bash[56371]: cephadm 2026-03-10T05:53:43.649166+0000 mgr.y (mgr.24992) 70 : cephadm [INF] Reconfiguring daemon mon.b on vm05
2026-03-10T05:53:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:44 vm02 bash[56371]: audit 2026-03-10T05:53:44.034880+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:44 vm02 bash[56371]: audit 2026-03-10T05:53:44.041609+0000 mon.a (mon.0) 164 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:44 vm02 bash[56371]: audit 2026-03-10T05:53:44.043385+0000 mon.a (mon.0) 165 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm05.hqqmap", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:53:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:44 vm02 bash[56371]: audit 2026-03-10T05:53:44.044942+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:44 vm02 bash[56371]: audit 2026-03-10T05:53:44.423999+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:44 vm02 bash[56371]: audit 2026-03-10T05:53:44.431339+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:44 vm02 bash[56371]: audit 2026-03-10T05:53:44.432584+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-10T05:53:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:44 vm02 bash[56371]: audit 2026-03-10T05:53:44.433117+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:45.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:44 vm02 bash[55303]: cluster 2026-03-10T05:53:42.831730+0000 mgr.y (mgr.24992) 64 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:53:45.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:44 vm02 bash[55303]: cephadm 2026-03-10T05:53:42.874898+0000 mgr.y (mgr.24992) 65 : cephadm [INF] Reconfiguring osd.6 (monmap changed)...
2026-03-10T05:53:45.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:44 vm02 bash[55303]: cephadm 2026-03-10T05:53:42.877011+0000 mgr.y (mgr.24992) 66 : cephadm [INF] Reconfiguring daemon osd.6 on vm05
2026-03-10T05:53:45.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:44 vm02 bash[55303]: cephadm 2026-03-10T05:53:43.275957+0000 mgr.y (mgr.24992) 67 : cephadm [INF] Reconfiguring rgw.foo.vm05.hvmsxl (monmap changed)...
2026-03-10T05:53:45.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:44 vm02 bash[55303]: cephadm 2026-03-10T05:53:43.277584+0000 mgr.y (mgr.24992) 68 : cephadm [INF] Reconfiguring daemon rgw.foo.vm05.hvmsxl on vm05
2026-03-10T05:53:45.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:44 vm02 bash[55303]: audit 2026-03-10T05:53:43.639047+0000 mon.a (mon.0) 158 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:45.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:44 vm02 bash[55303]: audit 2026-03-10T05:53:43.645121+0000 mon.a (mon.0) 159 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:45.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:44 vm02 bash[55303]: cephadm 2026-03-10T05:53:43.646119+0000 mgr.y (mgr.24992) 69 : cephadm [INF] Reconfiguring mon.b (monmap changed)...
2026-03-10T05:53:45.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:44 vm02 bash[55303]: audit 2026-03-10T05:53:43.646724+0000 mon.a (mon.0) 160 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T05:53:45.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:44 vm02 bash[55303]: audit 2026-03-10T05:53:43.647582+0000 mon.a (mon.0) 161 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T05:53:45.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:44 vm02 bash[55303]: audit 2026-03-10T05:53:43.648361+0000 mon.a (mon.0) 162 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:45.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:44 vm02 bash[55303]: cephadm 2026-03-10T05:53:43.649166+0000 mgr.y (mgr.24992) 70 : cephadm [INF] Reconfiguring daemon mon.b on vm05
2026-03-10T05:53:45.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:44 vm02 bash[55303]: audit 2026-03-10T05:53:44.034880+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:45.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:44 vm02 bash[55303]: audit 2026-03-10T05:53:44.041609+0000 mon.a (mon.0) 164 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:45.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:44 vm02 bash[55303]: audit 2026-03-10T05:53:44.043385+0000 mon.a (mon.0) 165 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm05.hqqmap", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:53:45.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:44 vm02 bash[55303]: audit 2026-03-10T05:53:44.044942+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:45.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:44 vm02 bash[55303]: audit 2026-03-10T05:53:44.423999+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:45.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:44 vm02 bash[55303]: audit 2026-03-10T05:53:44.431339+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:45.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:44 vm02 bash[55303]: audit 2026-03-10T05:53:44.432584+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-10T05:53:45.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:44 vm02 bash[55303]: audit 2026-03-10T05:53:44.433117+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:45.664 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: cephadm 2026-03-10T05:53:44.042662+0000 mgr.y (mgr.24992) 71 : cephadm [INF] Reconfiguring rgw.smpl.vm05.hqqmap (monmap changed)...
2026-03-10T05:53:45.664 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: cephadm 2026-03-10T05:53:44.045710+0000 mgr.y (mgr.24992) 72 : cephadm [INF] Reconfiguring daemon rgw.smpl.vm05.hqqmap on vm05 2026-03-10T05:53:45.664 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: cephadm 2026-03-10T05:53:44.045710+0000 mgr.y (mgr.24992) 72 : cephadm [INF] Reconfiguring daemon rgw.smpl.vm05.hqqmap on vm05 2026-03-10T05:53:45.664 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: cephadm 2026-03-10T05:53:44.432202+0000 mgr.y (mgr.24992) 73 : cephadm [INF] Reconfiguring osd.7 (monmap changed)... 2026-03-10T05:53:45.664 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: cephadm 2026-03-10T05:53:44.432202+0000 mgr.y (mgr.24992) 73 : cephadm [INF] Reconfiguring osd.7 (monmap changed)... 2026-03-10T05:53:45.664 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: cephadm 2026-03-10T05:53:44.434311+0000 mgr.y (mgr.24992) 74 : cephadm [INF] Reconfiguring daemon osd.7 on vm05 2026-03-10T05:53:45.664 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: cephadm 2026-03-10T05:53:44.434311+0000 mgr.y (mgr.24992) 74 : cephadm [INF] Reconfiguring daemon osd.7 on vm05 2026-03-10T05:53:45.664 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.820428+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:45.664 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.820428+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:45.664 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.826763+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:45.664 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.826763+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:45.664 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.856399+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:53:45.664 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.856399+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:53:45.664 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.857721+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:53:45.664 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.857721+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:53:45.664 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.858708+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 
cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:53:45.664 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.858708+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:53:45.664 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.865937+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:45.664 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.865937+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:45.664 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.869008+0000 mon.a (mon.0) 177 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-10T05:53:45.664 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.869008+0000 mon.a (mon.0) 177 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-10T05:53:45.664 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.875396+0000 mon.a (mon.0) 178 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]': finished 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.875396+0000 mon.a (mon.0) 178 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]': finished 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.876833+0000 mon.a (mon.0) 179 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.876833+0000 mon.a (mon.0) 179 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.879837+0000 mon.a (mon.0) 180 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]': finished 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.879837+0000 mon.a (mon.0) 180 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]': finished 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.882590+0000 mon.a (mon.0) 181 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": 
"mon.c"}]: dispatch 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.882590+0000 mon.a (mon.0) 181 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.886077+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]': finished 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.886077+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]': finished 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.888872+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.888872+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.892326+0000 mon.a (mon.0) 184 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.892326+0000 mon.a (mon.0) 184 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.893931+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:44.893931+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:45.326226+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:45.326226+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:45.329698+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:45.329698+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": 
"osd.3"}]: dispatch 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:45.330206+0000 mon.a (mon.0) 188 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:45 vm02 bash[56371]: audit 2026-03-10T05:53:45.330206+0000 mon.a (mon.0) 188 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:45 vm02 bash[55303]: cephadm 2026-03-10T05:53:44.042662+0000 mgr.y (mgr.24992) 71 : cephadm [INF] Reconfiguring rgw.smpl.vm05.hqqmap (monmap changed)... 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:45 vm02 bash[55303]: cephadm 2026-03-10T05:53:44.042662+0000 mgr.y (mgr.24992) 71 : cephadm [INF] Reconfiguring rgw.smpl.vm05.hqqmap (monmap changed)... 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:45 vm02 bash[55303]: cephadm 2026-03-10T05:53:44.045710+0000 mgr.y (mgr.24992) 72 : cephadm [INF] Reconfiguring daemon rgw.smpl.vm05.hqqmap on vm05 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:45 vm02 bash[55303]: cephadm 2026-03-10T05:53:44.045710+0000 mgr.y (mgr.24992) 72 : cephadm [INF] Reconfiguring daemon rgw.smpl.vm05.hqqmap on vm05 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:45 vm02 bash[55303]: cephadm 2026-03-10T05:53:44.432202+0000 mgr.y (mgr.24992) 73 : cephadm [INF] Reconfiguring osd.7 (monmap changed)... 2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:45 vm02 bash[55303]: cephadm 2026-03-10T05:53:44.432202+0000 mgr.y (mgr.24992) 73 : cephadm [INF] Reconfiguring osd.7 (monmap changed)... 
2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:45 vm02 bash[55303]: cephadm 2026-03-10T05:53:44.434311+0000 mgr.y (mgr.24992) 74 : cephadm [INF] Reconfiguring daemon osd.7 on vm05
2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:45 vm02 bash[55303]: audit 2026-03-10T05:53:44.820428+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:45 vm02 bash[55303]: audit 2026-03-10T05:53:44.826763+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:45 vm02 bash[55303]: audit 2026-03-10T05:53:44.856399+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:45 vm02 bash[55303]: audit 2026-03-10T05:53:44.857721+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:45 vm02 bash[55303]: audit 2026-03-10T05:53:44.858708+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:45 vm02 bash[55303]: audit 2026-03-10T05:53:44.865937+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:45 vm02 bash[55303]: audit 2026-03-10T05:53:44.869008+0000 mon.a (mon.0) 177 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:45 vm02 bash[55303]: audit 2026-03-10T05:53:44.875396+0000 mon.a (mon.0) 178 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]': finished
2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:45 vm02 bash[55303]: audit 2026-03-10T05:53:44.876833+0000 mon.a (mon.0) 179 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch
2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:45 vm02 bash[55303]: audit 2026-03-10T05:53:44.879837+0000 mon.a (mon.0) 180 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]': finished
2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:45 vm02 bash[55303]: audit 2026-03-10T05:53:44.882590+0000 mon.a (mon.0) 181 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:45 vm02 bash[55303]: audit 2026-03-10T05:53:44.886077+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]': finished
2026-03-10T05:53:45.665 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:45 vm02 bash[55303]: audit 2026-03-10T05:53:44.888872+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:53:45.666 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:45 vm02 bash[55303]: audit 2026-03-10T05:53:44.892326+0000 mon.a (mon.0) 184 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:45.666 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:45 vm02 bash[55303]: audit 2026-03-10T05:53:44.893931+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch
2026-03-10T05:53:45.666 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:45 vm02 bash[55303]: audit 2026-03-10T05:53:45.326226+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:45.666 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:45 vm02 bash[55303]: audit 2026-03-10T05:53:45.329698+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-10T05:53:45.666 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:45 vm02 bash[55303]: audit 2026-03-10T05:53:45.330206+0000 mon.a (mon.0) 188 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:46.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:45 vm05 bash[43541]: cephadm 2026-03-10T05:53:44.042662+0000 mgr.y (mgr.24992) 71 : cephadm [INF] Reconfiguring rgw.smpl.vm05.hqqmap (monmap changed)...
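mon.c's journal replays the same audit sequence from its own vantage point, ending in the repeated versions dispatches with which the mgr tracks convergence. The same convergence check from a client shell, using jq as this run does elsewhere (the .overall field name is taken from the ceph versions output later in this log):

    # Wait until every daemon reports the same ceph version.
    until ceph versions | jq -e '.overall | length == 1' >/dev/null; do
        ceph versions | jq '.overall'
        sleep 30
    done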
2026-03-10T05:53:46.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:45 vm05 bash[43541]: cephadm 2026-03-10T05:53:44.045710+0000 mgr.y (mgr.24992) 72 : cephadm [INF] Reconfiguring daemon rgw.smpl.vm05.hqqmap on vm05
2026-03-10T05:53:46.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:45 vm05 bash[43541]: cephadm 2026-03-10T05:53:44.432202+0000 mgr.y (mgr.24992) 73 : cephadm [INF] Reconfiguring osd.7 (monmap changed)...
2026-03-10T05:53:46.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:45 vm05 bash[43541]: cephadm 2026-03-10T05:53:44.434311+0000 mgr.y (mgr.24992) 74 : cephadm [INF] Reconfiguring daemon osd.7 on vm05
2026-03-10T05:53:46.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:45 vm05 bash[43541]: audit 2026-03-10T05:53:44.820428+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:46.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:45 vm05 bash[43541]: audit 2026-03-10T05:53:44.826763+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:46.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:45 vm05 bash[43541]: audit 2026-03-10T05:53:44.856399+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:53:46.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:45 vm05 bash[43541]: audit 2026-03-10T05:53:44.857721+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
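The cephadm records mirrored on mon.b show the reconfigure fan-out that follows a monmap change: each dependent daemon (here rgw.smpl.vm05.hqqmap and osd.7) has its config regenerated on its host. A sketch of triggering the same reconfigure by hand; `ceph orch daemon reconfig` is assumed to be available, as in recent cephadm releases:

    # Regenerate the config of individual daemons, as the mgr does after a map change.
    ceph orch daemon reconfig rgw.smpl.vm05.hqqmap
    ceph orch daemon reconfig osd.7
    ceph orch ps --daemon-type rgw --refresh   # verify the daemons settle back to 'running'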
2026-03-10T05:53:46.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:45 vm05 bash[43541]: audit 2026-03-10T05:53:44.858708+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:53:46.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:45 vm05 bash[43541]: audit 2026-03-10T05:53:44.865937+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:46.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:45 vm05 bash[43541]: audit 2026-03-10T05:53:44.869008+0000 mon.a (mon.0) 177 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T05:53:46.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:45 vm05 bash[43541]: audit 2026-03-10T05:53:44.875396+0000 mon.a (mon.0) 178 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]': finished
2026-03-10T05:53:46.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:45 vm05 bash[43541]: audit 2026-03-10T05:53:44.876833+0000 mon.a (mon.0) 179 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch
2026-03-10T05:53:46.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:45 vm05 bash[43541]: audit 2026-03-10T05:53:44.879837+0000 mon.a (mon.0) 180 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]': finished
2026-03-10T05:53:46.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:45 vm05 bash[43541]: audit 2026-03-10T05:53:44.882590+0000 mon.a (mon.0) 181 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T05:53:46.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:45 vm05 bash[43541]: audit 2026-03-10T05:53:44.886077+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]': finished
2026-03-10T05:53:46.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:45 vm05 bash[43541]: audit 2026-03-10T05:53:44.888872+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:53:46.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:45 vm05 bash[43541]: audit 2026-03-10T05:53:44.892326+0000 mon.a (mon.0) 184 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:46.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:45 vm05 bash[43541]: audit 2026-03-10T05:53:44.893931+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch
2026-03-10T05:53:46.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:45 vm05 bash[43541]: audit 2026-03-10T05:53:45.326226+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:46.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:45 vm05 bash[43541]: audit 2026-03-10T05:53:45.329698+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-10T05:53:46.002 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:45 vm05 bash[43541]: audit 2026-03-10T05:53:45.330206+0000 mon.a (mon.0) 188 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:53:46.208 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:46 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:53:46.208 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:46 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:53:46.208 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:46 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:53:46.208 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:53:46 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:53:46.208 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:53:46 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:53:46.208 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:53:46 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:53:46.208 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:46 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:53:46.208 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:46 vm02 systemd[1]: Stopping Ceph osd.3 for 107483ae-1c44-11f1-b530-c1172cd6122a...
2026-03-10T05:53:46.208 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:53:46 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:53:46.208 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:53:46 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:53:46.584 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:46 vm02 bash[34760]: debug 2026-03-10T05:53:46.203+0000 7f4e4a5b9700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T05:53:46.585 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:46 vm02 bash[34760]: debug 2026-03-10T05:53:46.203+0000 7f4e4a5b9700 -1 osd.3 91 *** Got signal Terminated ***
2026-03-10T05:53:46.585 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:46 vm02 bash[34760]: debug 2026-03-10T05:53:46.203+0000 7f4e4a5b9700 -1 osd.3 91 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T05:53:46.905 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:46 vm02 bash[56371]: cluster 2026-03-10T05:53:44.832334+0000 mgr.y (mgr.24992) 75 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:53:46.905 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:46 vm02 bash[56371]: cephadm 2026-03-10T05:53:44.859121+0000 mgr.y (mgr.24992) 76 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-10T05:53:46.905 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:46 vm02 bash[56371]: cephadm 2026-03-10T05:53:44.889367+0000 mgr.y (mgr.24992) 77 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-10T05:53:46.905 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:46 vm02 bash[56371]: audit 2026-03-10T05:53:44.894080+0000 mgr.y (mgr.24992) 78 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch
2026-03-10T05:53:46.905 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:46 vm02 bash[56371]: cephadm 2026-03-10T05:53:44.895509+0000 mgr.y (mgr.24992) 79 : cephadm [INF] Upgrade: osd.3 is safe to restart
2026-03-10T05:53:46.905 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:46 vm02 bash[56371]: cephadm 2026-03-10T05:53:45.320566+0000 mgr.y (mgr.24992) 80 : cephadm [INF] Upgrade: Updating osd.3
2026-03-10T05:53:46.905 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:46 vm02 bash[56371]: cephadm 2026-03-10T05:53:45.331467+0000 mgr.y (mgr.24992) 81 : cephadm [INF] Deploying daemon osd.3 on vm02
2026-03-10T05:53:46.905 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:46 vm02 bash[56371]: cluster 2026-03-10T05:53:46.206868+0000 mon.a (mon.0) 189 : cluster [INF] osd.3 marked itself down and dead
2026-03-10T05:53:46.905 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:46 vm02 bash[55303]: cluster 2026-03-10T05:53:44.832334+0000 mgr.y (mgr.24992) 75 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:53:46.905 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:46 vm02 bash[55303]: cephadm 2026-03-10T05:53:44.859121+0000 mgr.y (mgr.24992) 76 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-10T05:53:46.905 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:46 vm02 bash[55303]: cephadm 2026-03-10T05:53:44.889367+0000 mgr.y (mgr.24992) 77 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-10T05:53:46.905 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:46 vm02 bash[55303]: audit 2026-03-10T05:53:44.894080+0000 mgr.y (mgr.24992) 78 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch
2026-03-10T05:53:46.905 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:46 vm02 bash[55303]: cephadm 2026-03-10T05:53:44.895509+0000 mgr.y (mgr.24992) 79 : cephadm [INF] Upgrade: osd.3 is safe to restart
2026-03-10T05:53:46.905 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:46 vm02 bash[55303]: cephadm 2026-03-10T05:53:45.320566+0000 mgr.y (mgr.24992) 80 : cephadm [INF] Upgrade: Updating osd.3
2026-03-10T05:53:46.905 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:46 vm02 bash[55303]: cephadm 2026-03-10T05:53:45.331467+0000 mgr.y (mgr.24992) 81 : cephadm [INF] Deploying daemon osd.3 on vm02
2026-03-10T05:53:46.905 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:46 vm02 bash[55303]: cluster 2026-03-10T05:53:46.206868+0000 mon.a (mon.0) 189 : cluster [INF] osd.3 marked itself down and dead
2026-03-10T05:53:46.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:46 vm05 bash[43541]: cluster 2026-03-10T05:53:44.832334+0000 mgr.y (mgr.24992) 75 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:53:46.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:46 vm05 bash[43541]: cephadm 2026-03-10T05:53:44.859121+0000 mgr.y (mgr.24992) 76 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-10T05:53:46.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:46 vm05 bash[43541]: cephadm 2026-03-10T05:53:44.889367+0000 mgr.y (mgr.24992) 77 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-10T05:53:46.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:46 vm05 bash[43541]: audit 2026-03-10T05:53:44.894080+0000 mgr.y (mgr.24992) 78 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch
2026-03-10T05:53:46.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:46 vm05 bash[43541]: cephadm 2026-03-10T05:53:44.895509+0000 mgr.y (mgr.24992) 79 : cephadm [INF] Upgrade: osd.3 is safe to restart
2026-03-10T05:53:46.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:46 vm05 bash[43541]: cephadm 2026-03-10T05:53:45.320566+0000 mgr.y (mgr.24992) 80 : cephadm [INF] Upgrade: Updating osd.3
2026-03-10T05:53:46.947 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:46 vm05 bash[43541]: cephadm 2026-03-10T05:53:45.331467+0000 mgr.y (mgr.24992) 81 : cephadm [INF] Deploying daemon osd.3 on vm02
2026-03-10T05:53:46.947 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:46 vm05 bash[43541]: cluster 2026-03-10T05:53:46.206868+0000 mon.a (mon.0) 189 : cluster [INF] osd.3 marked itself down and dead
2026-03-10T05:53:47.165 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:46 vm02 bash[58576]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-osd-3
2026-03-10T05:53:47.250 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:53:46 vm05 bash[41269]: ts=2026-03-10T05:53:46.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:53:47.484 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:53:47 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:53:47.484 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:53:47 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:53:47.484 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:53:47 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:53:47.484 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:47 vm02 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.3.service: Deactivated successfully.
2026-03-10T05:53:47.484 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:47 vm02 systemd[1]: Stopped Ceph osd.3 for 107483ae-1c44-11f1-b530-c1172cd6122a.
2026-03-10T05:53:47.484 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:47 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:53:47.484 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:47 vm02 systemd[1]: Started Ceph osd.3 for 107483ae-1c44-11f1-b530-c1172cd6122a.
2026-03-10T05:53:47.484 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:47 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
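The osd.3 restart cycle above follows the gate visible in the audit stream: the mgr asks the mon `osd ok-to-stop` for the OSD, declares it safe, stops the templated systemd unit, and redeploys the daemon on the target image. Done by hand against this cluster it would look roughly like the sketch below (fsid copied from the unit names above; the plain systemctl restart stands in for cephadm's full redeploy):

    # Gate an OSD restart on data availability, as the upgrade does.
    fsid=107483ae-1c44-11f1-b530-c1172cd6122a
    if ceph osd ok-to-stop 3; then
        systemctl restart "ceph-${fsid}@osd.3.service"
    fi
    ceph health detail   # expect a transient OSD_DOWN while the daemon cycles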
2026-03-10T05:53:47.484 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:47 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:53:47.484 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:47 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:53:47.484 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:53:47 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:53:47.484 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:53:47 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
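Every daemon on the host logs the same complaint because they all share one templated unit, ceph-<fsid>@.service, which this cephadm version ships with KillMode=none. The standard systemd remedy is a drop-in override; whether a given cephadm release behaves correctly under a different KillMode is an assumption to verify before rolling this out:

    # Hypothetical drop-in to silence the warning; test on one host first.
    unit=ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service
    mkdir -p "/etc/systemd/system/${unit}.d"
    printf '[Service]\nKillMode=mixed\n' > "/etc/systemd/system/${unit}.d/10-killmode.conf"
    systemctl daemon-reload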
2026-03-10T05:53:47.834 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:47 vm02 bash[58786]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T05:53:47.835 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:47 vm02 bash[58786]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T05:53:47.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:47 vm02 bash[56371]: cluster 2026-03-10T05:53:46.828634+0000 mon.a (mon.0) 190 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T05:53:47.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:47 vm02 bash[56371]: cluster 2026-03-10T05:53:46.828634+0000 mon.a (mon.0) 190 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T05:53:47.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:47 vm02 bash[56371]: cluster 2026-03-10T05:53:46.836923+0000 mon.a (mon.0) 191 : cluster [DBG] osdmap e92: 8 total, 7 up, 8 in 2026-03-10T05:53:47.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:47 vm02 bash[56371]: cluster 2026-03-10T05:53:46.836923+0000 mon.a (mon.0) 191 : cluster [DBG] osdmap e92: 8 total, 7 up, 8 in 2026-03-10T05:53:47.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:47 vm02 bash[56371]: audit 2026-03-10T05:53:47.436271+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:47.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:47 vm02 bash[56371]: audit 2026-03-10T05:53:47.436271+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:47.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:47 vm02 bash[56371]: audit 2026-03-10T05:53:47.441659+0000 mon.a (mon.0) 193 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:47.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:47 vm02 bash[56371]: audit 2026-03-10T05:53:47.441659+0000 mon.a (mon.0) 193 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:47.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:47 vm02 bash[55303]: cluster 2026-03-10T05:53:46.828634+0000 mon.a (mon.0) 190 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T05:53:47.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:47 vm02 bash[55303]: cluster 2026-03-10T05:53:46.828634+0000 mon.a (mon.0) 190 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T05:53:47.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:47 vm02 bash[55303]: cluster 2026-03-10T05:53:46.836923+0000 mon.a (mon.0) 191 : cluster [DBG] osdmap e92: 8 total, 7 up, 8 in 2026-03-10T05:53:47.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:47 vm02 bash[55303]: cluster 2026-03-10T05:53:46.836923+0000 mon.a (mon.0) 191 : cluster [DBG] osdmap e92: 8 total, 7 up, 8 in 2026-03-10T05:53:47.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:47 vm02 bash[55303]: audit 2026-03-10T05:53:47.436271+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:47.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:47 vm02 bash[55303]: audit 2026-03-10T05:53:47.436271+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:47.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:47 vm02 bash[55303]: audit 2026-03-10T05:53:47.441659+0000 mon.a (mon.0) 193 : audit [INF] from='mgr.24992 
192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:47.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:47 vm02 bash[55303]: audit 2026-03-10T05:53:47.441659+0000 mon.a (mon.0) 193 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:48.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:47 vm05 bash[43541]: cluster 2026-03-10T05:53:46.828634+0000 mon.a (mon.0) 190 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T05:53:48.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:47 vm05 bash[43541]: cluster 2026-03-10T05:53:46.828634+0000 mon.a (mon.0) 190 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T05:53:48.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:47 vm05 bash[43541]: cluster 2026-03-10T05:53:46.836923+0000 mon.a (mon.0) 191 : cluster [DBG] osdmap e92: 8 total, 7 up, 8 in 2026-03-10T05:53:48.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:47 vm05 bash[43541]: cluster 2026-03-10T05:53:46.836923+0000 mon.a (mon.0) 191 : cluster [DBG] osdmap e92: 8 total, 7 up, 8 in 2026-03-10T05:53:48.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:47 vm05 bash[43541]: audit 2026-03-10T05:53:47.436271+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:48.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:47 vm05 bash[43541]: audit 2026-03-10T05:53:47.436271+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:48.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:47 vm05 bash[43541]: audit 2026-03-10T05:53:47.441659+0000 mon.a (mon.0) 193 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:48.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:47 vm05 bash[43541]: audit 2026-03-10T05:53:47.441659+0000 mon.a (mon.0) 193 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:53:48.820 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:48 vm02 bash[56371]: cluster 2026-03-10T05:53:46.832625+0000 mgr.y (mgr.24992) 82 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:53:48.820 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:48 vm02 bash[56371]: cluster 2026-03-10T05:53:46.832625+0000 mgr.y (mgr.24992) 82 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:53:48.820 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:48 vm02 bash[56371]: audit 2026-03-10T05:53:46.894479+0000 mgr.y (mgr.24992) 83 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:53:48.820 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:48 vm02 bash[56371]: audit 2026-03-10T05:53:46.894479+0000 mgr.y (mgr.24992) 83 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:53:48.820 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:48 vm02 bash[56371]: cluster 2026-03-10T05:53:47.841881+0000 mon.a (mon.0) 194 : cluster [DBG] osdmap e93: 8 total, 7 up, 8 in 2026-03-10T05:53:48.820 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:48 vm02 bash[56371]: cluster 2026-03-10T05:53:47.841881+0000 mon.a (mon.0) 194 : cluster [DBG] 
osdmap e93: 8 total, 7 up, 8 in 2026-03-10T05:53:48.820 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:48 vm02 bash[58786]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-10T05:53:48.820 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:48 vm02 bash[58786]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T05:53:48.820 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:48 vm02 bash[58786]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T05:53:48.820 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:48 vm02 bash[58786]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 2026-03-10T05:53:48.820 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:48 vm02 bash[58786]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-15cff827-32d2-4ef0-bd8e-b821626f6fa4/osd-block-c8c62231-6895-42f2-ba03-c49e0ca5380e --path /var/lib/ceph/osd/ceph-3 --no-mon-config 2026-03-10T05:53:48.821 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:48 vm02 bash[55303]: cluster 2026-03-10T05:53:46.832625+0000 mgr.y (mgr.24992) 82 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:53:48.821 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:48 vm02 bash[55303]: cluster 2026-03-10T05:53:46.832625+0000 mgr.y (mgr.24992) 82 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:53:48.821 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:48 vm02 bash[55303]: audit 2026-03-10T05:53:46.894479+0000 mgr.y (mgr.24992) 83 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:53:48.821 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:48 vm02 bash[55303]: audit 2026-03-10T05:53:46.894479+0000 mgr.y (mgr.24992) 83 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:53:48.821 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:48 vm02 bash[55303]: cluster 2026-03-10T05:53:47.841881+0000 mon.a (mon.0) 194 : cluster [DBG] osdmap e93: 8 total, 7 up, 8 in 2026-03-10T05:53:48.821 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:48 vm02 bash[55303]: cluster 2026-03-10T05:53:47.841881+0000 mon.a (mon.0) 194 : cluster [DBG] osdmap e93: 8 total, 7 up, 8 in 2026-03-10T05:53:49.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:48 vm05 bash[43541]: cluster 2026-03-10T05:53:46.832625+0000 mgr.y (mgr.24992) 82 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:53:49.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:48 vm05 bash[43541]: cluster 2026-03-10T05:53:46.832625+0000 mgr.y (mgr.24992) 82 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:53:49.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:48 vm05 bash[43541]: audit 2026-03-10T05:53:46.894479+0000 mgr.y (mgr.24992) 83 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:53:49.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:48 vm05 bash[43541]: audit 2026-03-10T05:53:46.894479+0000 
mgr.y (mgr.24992) 83 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:53:49.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:48 vm05 bash[43541]: cluster 2026-03-10T05:53:47.841881+0000 mon.a (mon.0) 194 : cluster [DBG] osdmap e93: 8 total, 7 up, 8 in
2026-03-10T05:53:49.084 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:48 vm02 bash[58786]: Running command: /usr/bin/ln -snf /dev/ceph-15cff827-32d2-4ef0-bd8e-b821626f6fa4/osd-block-c8c62231-6895-42f2-ba03-c49e0ca5380e /var/lib/ceph/osd/ceph-3/block
2026-03-10T05:53:49.084 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:48 vm02 bash[58786]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block
2026-03-10T05:53:49.084 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:48 vm02 bash[58786]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-3
2026-03-10T05:53:49.084 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:48 vm02 bash[58786]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
2026-03-10T05:53:49.084 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:48 vm02 bash[58786]: --> ceph-volume lvm activate successful for osd ID: 3
2026-03-10T05:53:49.084 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:48 vm02 bash[59145]: debug 2026-03-10T05:53:48.959+0000 7f7cfbc3b640 1 -- 192.168.123.102:0/1507408151 <== mon.0 v2:192.168.123.102:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x55ba76ed5680 con 0x55ba76ece000
2026-03-10T05:53:49.607 INFO:teuthology.orchestra.run.vm02.stdout:true
2026-03-10T05:53:50.000 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T05:53:50.000 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.a vm02 *:9093,9094 running (112s) 33s ago 6m 14.8M - 0.25.0 c8568f914cd2 7a7c5c2cddb6
2026-03-10T05:53:50.000 INFO:teuthology.orchestra.run.vm02.stdout:grafana.a vm05 *:3000 running (110s) 18s ago 6m 39.4M - dad864ee21e9 95c6d977988a
2026-03-10T05:53:50.000 INFO:teuthology.orchestra.run.vm02.stdout:iscsi.foo.vm02.mxbwmh vm02 running (73s) 33s ago 6m 43.0M - 3.5 e1d6a67b021e 62aba5b41046
2026-03-10T05:53:50.000 INFO:teuthology.orchestra.run.vm02.stdout:mgr.x vm05 *:8443,9283,8765 running (71s) 18s ago 9m 464M - 19.2.3-678-ge911bdeb 654f31e6858e 7579626ada90
2026-03-10T05:53:50.000 INFO:teuthology.orchestra.run.vm02.stdout:mgr.y vm02 *:8443,9283,8765 running (101s) 33s ago 9m 508M - 19.2.3-678-ge911bdeb 654f31e6858e ef46d0f7b15e
2026-03-10T05:53:50.000 INFO:teuthology.orchestra.run.vm02.stdout:mon.a vm02 running (43s) 33s ago 9m 30.8M 2048M 19.2.3-678-ge911bdeb 654f31e6858e df3a0a290a95
2026-03-10T05:53:50.000 INFO:teuthology.orchestra.run.vm02.stdout:mon.b vm05 running (24s) 18s ago 9m 19.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1da04b90d16b
2026-03-10T05:53:50.000 INFO:teuthology.orchestra.run.vm02.stdout:mon.c vm02 running (58s) 33s ago 9m 32.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 7f2cdf1b7aa6
2026-03-10T05:53:50.000 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.a vm02 *:9100 running (109s) 33s ago 6m 7235k - 1.7.0 72c9c2088986 90288450bd1f
2026-03-10T05:53:50.000 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.b vm05 *:9100 running (107s) 18s ago 6m 7275k - 1.7.0 72c9c2088986 4e859143cb0e
2026-03-10T05:53:50.000 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (9m) 33s ago 9m 51.4M 4096M 17.2.0 e1d6a67b021e 563d55a3e6a4
2026-03-10T05:53:50.000 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (8m) 33s ago 8m 54.2M 4096M 17.2.0 e1d6a67b021e 8c25a1e89677
2026-03-10T05:53:50.000 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (8m) 33s ago 8m 49.5M 4096M 17.2.0 e1d6a67b021e 826f54bdbc5c
2026-03-10T05:53:50.000 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm02 starting - - - 4096M
2026-03-10T05:53:50.000 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (8m) 18s ago 8m 53.2M 4096M 17.2.0 e1d6a67b021e 4ffe1741f201
2026-03-10T05:53:50.000 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (7m) 18s ago 7m 52.2M 4096M 17.2.0 e1d6a67b021e cba5583c238e
2026-03-10T05:53:50.000 INFO:teuthology.orchestra.run.vm02.stdout:osd.6 vm05 running (7m) 18s ago 7m 49.8M 4096M 17.2.0 e1d6a67b021e 9d1b370357d7
2026-03-10T05:53:50.000 INFO:teuthology.orchestra.run.vm02.stdout:osd.7 vm05 running (7m) 18s ago 7m 51.3M 4096M 17.2.0 e1d6a67b021e 8a4837b788cf
2026-03-10T05:53:50.000 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.a vm05 *:9095 running (72s) 18s ago 6m 37.3M - 2.51.0 1d3b7f56885b 3328811f8f28
2026-03-10T05:53:50.000 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm02.pbogjd vm02 *:8000 running (6m) 33s ago 6m 86.8M - 17.2.0 e1d6a67b021e 2ab2ffd1abaa
2026-03-10T05:53:50.000 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm05.hvmsxl vm05 *:8000 running (6m) 18s ago 6m 85.8M - 17.2.0 e1d6a67b021e 85d1c77b7e9d
2026-03-10T05:53:50.000 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm02.pglcfm vm02 *:80 running (6m) 33s ago 6m 85.6M - 17.2.0 e1d6a67b021e ef152a460673
2026-03-10T05:53:50.000 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm05.hqqmap vm05 *:80 running (6m) 18s ago 6m 86.0M - 17.2.0 e1d6a67b021e 29c9ee794f34
2026-03-10T05:53:50.227 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:53:50.228 INFO:teuthology.orchestra.run.vm02.stdout: "mon": {
2026-03-10T05:53:50.228 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-10T05:53:50.228 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:53:50.228 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": {
2026-03-10T05:53:50.228 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T05:53:50.228 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:53:50.228 INFO:teuthology.orchestra.run.vm02.stdout: "osd": {
2026-03-10T05:53:50.228 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 7
2026-03-10T05:53:50.228 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:53:50.228 INFO:teuthology.orchestra.run.vm02.stdout: "rgw": {
2026-03-10T05:53:50.228 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4
2026-03-10T05:53:50.228 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:53:50.228 INFO:teuthology.orchestra.run.vm02.stdout: "overall": {
2026-03-10T05:53:50.228 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 11,
2026-03-10T05:53:50.228 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 5
2026-03-10T05:53:50.228 INFO:teuthology.orchestra.run.vm02.stdout: }
2026-03-10T05:53:50.228 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:53:50.334 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:49 vm02 bash[59145]: debug 2026-03-10T05:53:49.899+0000 7f7cfe4a5740 -1 Falling back to public interface
2026-03-10T05:53:50.410 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:53:50.410 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
2026-03-10T05:53:50.410 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true,
2026-03-10T05:53:50.411 INFO:teuthology.orchestra.run.vm02.stdout: "which": "Upgrading all daemon types on all hosts",
2026-03-10T05:53:50.411 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [
2026-03-10T05:53:50.411 INFO:teuthology.orchestra.run.vm02.stdout: "mgr",
2026-03-10T05:53:50.411 INFO:teuthology.orchestra.run.vm02.stdout: "mon"
2026-03-10T05:53:50.411 INFO:teuthology.orchestra.run.vm02.stdout: ],
2026-03-10T05:53:50.411 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "5/23 daemons upgraded",
2026-03-10T05:53:50.411 INFO:teuthology.orchestra.run.vm02.stdout: "message": "Currently upgrading osd daemons",
2026-03-10T05:53:50.411 INFO:teuthology.orchestra.run.vm02.stdout: "is_paused": false
2026-03-10T05:53:50.411 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:53:50.634 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_WARN 1 osds down
2026-03-10T05:53:50.634 INFO:teuthology.orchestra.run.vm02.stdout:[WRN] OSD_DOWN: 1 osds down
2026-03-10T05:53:50.634 INFO:teuthology.orchestra.run.vm02.stdout: osd.3 (root=default,host=vm02) is down
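The "true" echoed at 05:53:49.607 is the in_progress flag read back from the status JSON above. A minimal sketch of such a poll, assuming jq is installed and the shell holds an admin keyring (the 30 s interval is an illustrative choice, not taken from this run):

    # Poll the orchestrator until the upgrade stops reporting in_progress.
    # jq -e exits 0 while .in_progress is true, which keeps the loop going.
    while ceph orch upgrade status | jq -e '.in_progress' >/dev/null; do
        ceph orch upgrade status | jq -r '"\(.progress) \(.message)"'
        sleep 30
    done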
2026-03-10T05:53:51.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:50 vm05 bash[43541]: cluster 2026-03-10T05:53:48.832909+0000 mgr.y (mgr.24992) 84 : cluster [DBG] pgmap v24: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T05:53:51.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:50 vm05 bash[43541]: audit 2026-03-10T05:53:49.592140+0000 mgr.y (mgr.24992) 85 : audit [DBG] from='client.34168 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:53:51.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:50 vm05 bash[43541]: audit 2026-03-10T05:53:50.227059+0000 mon.a (mon.0) 195 : audit [DBG] from='client.? 192.168.123.102:0/1990538266' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:53:51.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:50 vm05 bash[43541]: audit 2026-03-10T05:53:50.636172+0000 mon.b (mon.2) 3 : audit [DBG] from='client.? 192.168.123.102:0/3020412264' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T05:53:51.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:50 vm02 bash[56371]: cluster 2026-03-10T05:53:48.832909+0000 mgr.y (mgr.24992) 84 : cluster [DBG] pgmap v24: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T05:53:51.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:50 vm02 bash[56371]: audit 2026-03-10T05:53:49.592140+0000 mgr.y (mgr.24992) 85 : audit [DBG] from='client.34168 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:53:51.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:50 vm02 bash[56371]: audit 2026-03-10T05:53:50.227059+0000 mon.a (mon.0) 195 : audit [DBG] from='client.? 192.168.123.102:0/1990538266' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:53:51.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:50 vm02 bash[56371]: audit 2026-03-10T05:53:50.636172+0000 mon.b (mon.2) 3 : audit [DBG] from='client.? 192.168.123.102:0/3020412264' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T05:53:51.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:50 vm02 bash[55303]: cluster 2026-03-10T05:53:48.832909+0000 mgr.y (mgr.24992) 84 : cluster [DBG] pgmap v24: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T05:53:51.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:50 vm02 bash[55303]: audit 2026-03-10T05:53:49.592140+0000 mgr.y (mgr.24992) 85 : audit [DBG] from='client.34168 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:53:51.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:50 vm02 bash[55303]: audit 2026-03-10T05:53:50.227059+0000 mon.a (mon.0) 195 : audit [DBG] from='client.? 192.168.123.102:0/1990538266' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:53:51.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:50 vm02 bash[55303]: audit 2026-03-10T05:53:50.636172+0000 mon.b (mon.2) 3 : audit [DBG] from='client.? 192.168.123.102:0/3020412264' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T05:53:51.584 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:51 vm02 bash[59145]: debug 2026-03-10T05:53:51.115+0000 7f7cfe4a5740 -1 osd.3 0 read_superblock omap replica is missing.
2026-03-10T05:53:51.584 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:51 vm02 bash[59145]: debug 2026-03-10T05:53:51.139+0000 7f7cfe4a5740 -1 osd.3 91 log_to_monitors true
2026-03-10T05:53:51.667 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:51 vm05 bash[43541]: audit 2026-03-10T05:53:49.807356+0000 mgr.y (mgr.24992) 86 : audit [DBG] from='client.54110 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:53:51.667 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:51 vm05 bash[43541]: audit 2026-03-10T05:53:49.995266+0000 mgr.y (mgr.24992) 87 : audit [DBG] from='client.34180 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:53:52.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:51 vm05 bash[43541]: audit 2026-03-10T05:53:50.409817+0000 mgr.y (mgr.24992) 88 : audit [DBG] from='client.54122 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:53:52.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:51 vm05 bash[43541]: audit 2026-03-10T05:53:51.147990+0000 mon.c (mon.1) 4 : audit [INF] from='osd.3 [v2:192.168.123.102:6826/604934260,v1:192.168.123.102:6827/604934260]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T05:53:52.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:51 vm05 bash[43541]: audit 2026-03-10T05:53:51.148410+0000 mon.a (mon.0) 196 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T05:53:52.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:51 vm02 bash[56371]: audit 2026-03-10T05:53:49.807356+0000 mgr.y (mgr.24992) 86 : audit [DBG] from='client.54110 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:53:52.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:51 vm02 bash[56371]: audit 2026-03-10T05:53:49.995266+0000 mgr.y (mgr.24992) 87 : audit [DBG] from='client.34180 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:53:52.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:51 vm02 bash[56371]: audit 2026-03-10T05:53:50.409817+0000 mgr.y (mgr.24992) 88 : audit [DBG] from='client.54122 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:53:52.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:51 vm02 bash[56371]: audit 2026-03-10T05:53:51.147990+0000 mon.c (mon.1) 4 : audit [INF] from='osd.3 [v2:192.168.123.102:6826/604934260,v1:192.168.123.102:6827/604934260]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T05:53:52.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:51 vm02 bash[56371]: audit 2026-03-10T05:53:51.148410+0000 mon.a (mon.0) 196 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T05:53:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:51 vm02 bash[55303]: audit 2026-03-10T05:53:49.807356+0000 mgr.y (mgr.24992) 86 : audit [DBG] from='client.54110 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:53:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:51 vm02 bash[55303]: audit 2026-03-10T05:53:49.995266+0000 mgr.y (mgr.24992) 87 : audit [DBG] from='client.34180 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:53:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:51 vm02 bash[55303]: audit 2026-03-10T05:53:50.409817+0000 mgr.y (mgr.24992) 88 : audit [DBG] from='client.54122 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:53:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:51 vm02 bash[55303]: audit 2026-03-10T05:53:51.147990+0000 mon.c (mon.1) 4 : audit [INF] from='osd.3 [v2:192.168.123.102:6826/604934260,v1:192.168.123.102:6827/604934260]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T05:53:52.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:51 vm02 bash[55303]: audit 2026-03-10T05:53:51.148410+0000 mon.a (mon.0) 196 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T05:53:52.834 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:53:52 vm02 bash[59145]: debug 2026-03-10T05:53:52.575+0000 7f7cf5a4f640 -1 osd.3 91 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-10T05:53:52.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:52 vm02 bash[56371]: cluster 2026-03-10T05:53:50.833208+0000 mgr.y (mgr.24992) 89 : cluster [DBG] pgmap v25: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:53:52.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:52 vm02 bash[56371]: audit 2026-03-10T05:53:51.663965+0000 mon.a (mon.0) 197 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-10T05:53:52.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:52 vm02 bash[56371]: cluster 2026-03-10T05:53:51.673431+0000 mon.a (mon.0) 198 : cluster [DBG] osdmap e94: 8 total, 7 up, 8 in
2026-03-10T05:53:52.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:52 vm02 bash[56371]: audit 2026-03-10T05:53:51.674081+0000 mon.c (mon.1) 5 : audit [INF] from='osd.3 [v2:192.168.123.102:6826/604934260,v1:192.168.123.102:6827/604934260]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
2026-03-10T05:53:52.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:52 vm02 bash[56371]: audit 2026-03-10T05:53:51.677061+0000 mon.a (mon.0) 199 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
2026-03-10T05:53:52.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:52 vm02 bash[55303]: cluster 2026-03-10T05:53:50.833208+0000 mgr.y (mgr.24992) 89 : cluster [DBG] pgmap v25: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:53:52.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:52 vm02 bash[55303]: audit 2026-03-10T05:53:51.663965+0000 mon.a (mon.0) 197 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-10T05:53:52.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:52 vm02 bash[55303]: cluster 2026-03-10T05:53:51.673431+0000 mon.a (mon.0) 198 : cluster [DBG] osdmap e94: 8 total, 7 up, 8 in
2026-03-10T05:53:52.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:52 vm02 bash[55303]: audit 2026-03-10T05:53:51.674081+0000 mon.c (mon.1) 5 : audit [INF] from='osd.3 [v2:192.168.123.102:6826/604934260,v1:192.168.123.102:6827/604934260]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
2026-03-10T05:53:52.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:52 vm02 bash[55303]: audit 2026-03-10T05:53:51.677061+0000 mon.a (mon.0) 199 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
2026-03-10T05:53:53.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:52 vm05 bash[43541]: cluster 2026-03-10T05:53:50.833208+0000 mgr.y (mgr.24992) 89 : cluster [DBG] pgmap v25: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:53:53.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:52 vm05 bash[43541]: audit 2026-03-10T05:53:51.663965+0000 mon.a (mon.0) 197 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-10T05:53:53.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:52 vm05 bash[43541]: cluster 2026-03-10T05:53:51.673431+0000 mon.a (mon.0) 198 : cluster [DBG] osdmap e94: 8 total, 7 up, 8 in
2026-03-10T05:53:53.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:52 vm05 bash[43541]: audit 2026-03-10T05:53:51.674081+0000 mon.c (mon.1) 5 : audit [INF] from='osd.3 [v2:192.168.123.102:6826/604934260,v1:192.168.123.102:6827/604934260]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
2026-03-10T05:53:53.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:52 vm05 bash[43541]: audit 2026-03-10T05:53:51.677061+0000 mon.a (mon.0) 199 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
2026-03-10T05:53:53.335 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:53:52 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:53:52] "GET /metrics HTTP/1.1" 200 37814 "" "Prometheus/2.51.0"
2026-03-10T05:53:54.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:53 vm05 bash[43541]: cluster 2026-03-10T05:53:52.664307+0000 mon.a (mon.0) 200 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T05:53:54.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:53 vm05 bash[43541]: cluster 2026-03-10T05:53:52.664319+0000 mon.a (mon.0) 201 : cluster [INF] Cluster is now healthy
2026-03-10T05:53:54.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:53 vm05 bash[43541]: cluster 2026-03-10T05:53:52.687942+0000 mon.a (mon.0) 202 : cluster [INF] osd.3 [v2:192.168.123.102:6826/604934260,v1:192.168.123.102:6827/604934260] boot
2026-03-10T05:53:54.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:53 vm05 bash[43541]: cluster 2026-03-10T05:53:52.688072+0000 mon.a (mon.0) 203 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in
2026-03-10T05:53:54.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:53 vm05 bash[43541]: audit 2026-03-10T05:53:52.689538+0000 mon.a (mon.0) 204 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T05:53:54.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:53 vm05 bash[43541]: cluster 2026-03-10T05:53:52.833536+0000 mgr.y (mgr.24992) 90 : cluster [DBG] pgmap v28: 161 pgs: 42 active+undersized, 24 active+undersized+degraded, 95 active+clean; 457 KiB data, 123 MiB used, 160 GiB / 160 GiB avail; 96/723 objects degraded (13.278%)
2026-03-10T05:53:54.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:53 vm05 bash[43541]: cluster 2026-03-10T05:53:53.693568+0000 mon.a (mon.0) 205 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in
2026-03-10T05:53:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:53 vm02 bash[56371]: cluster 2026-03-10T05:53:52.664307+0000 mon.a (mon.0) 200 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T05:53:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:53 vm02 bash[56371]: cluster 2026-03-10T05:53:52.664319+0000 mon.a (mon.0) 201 : cluster [INF] Cluster is now healthy
2026-03-10T05:53:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:53 vm02 bash[56371]: cluster 2026-03-10T05:53:52.687942+0000 mon.a (mon.0) 202 : cluster [INF] osd.3 [v2:192.168.123.102:6826/604934260,v1:192.168.123.102:6827/604934260] boot
2026-03-10T05:53:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:53 vm02 bash[56371]: cluster 2026-03-10T05:53:52.688072+0000 mon.a (mon.0) 203 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in
2026-03-10T05:53:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:53 vm02 bash[56371]: audit 2026-03-10T05:53:52.689538+0000 mon.a (mon.0) 204 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T05:53:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:53 vm02 bash[56371]: cluster 2026-03-10T05:53:52.833536+0000 mgr.y (mgr.24992) 90 : cluster [DBG] pgmap v28: 161 pgs: 42 active+undersized, 24 active+undersized+degraded, 95 active+clean; 457 KiB data, 123 MiB used, 160 GiB / 160 GiB avail; 96/723 objects degraded (13.278%)
2026-03-10T05:53:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:53 vm02 bash[56371]: cluster 2026-03-10T05:53:53.693568+0000 mon.a (mon.0) 205 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in
2026-03-10T05:53:54.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:53 vm02 bash[55303]: cluster 2026-03-10T05:53:52.664307+0000 mon.a (mon.0) 200 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T05:53:54.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:53 vm02 bash[55303]: cluster 2026-03-10T05:53:52.664319+0000 mon.a (mon.0) 201 : cluster [INF] Cluster is now healthy
2026-03-10T05:53:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:53 vm02 bash[55303]: cluster 2026-03-10T05:53:52.687942+0000 mon.a (mon.0) 202 : cluster [INF] osd.3 [v2:192.168.123.102:6826/604934260,v1:192.168.123.102:6827/604934260] boot
2026-03-10T05:53:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:53 vm02 bash[55303]: cluster 2026-03-10T05:53:52.688072+0000 mon.a (mon.0) 203 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in
2026-03-10T05:53:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:53 vm02 bash[55303]: audit 2026-03-10T05:53:52.689538+0000 mon.a (mon.0) 204 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T05:53:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:53 vm02 bash[55303]: cluster 2026-03-10T05:53:52.833536+0000 mgr.y (mgr.24992) 90 : cluster [DBG] pgmap v28: 161 pgs: 42 active+undersized, 24 active+undersized+degraded, 95 active+clean; 457 KiB data, 123 MiB used, 160 GiB / 160 GiB avail; 96/723 objects degraded (13.278%)
2026-03-10T05:53:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:53 vm02 bash[55303]: cluster 2026-03-10T05:53:53.693568+0000 mon.a (mon.0) 205 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in
2026-03-10T05:53:54.500 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:53:54 vm05 bash[41269]: ts=2026-03-10T05:53:54.148Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:53:55.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:54 vm05 bash[43541]: cluster 2026-03-10T05:53:52.580695+0000 osd.3 (osd.3) 1 : cluster [WRN] OSD bench result of 30790.761249 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.3. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
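The bench warning above explicitly recommends measuring the OSD's real IOPS capacity with an external tool and then overriding the mclock option. A minimal sketch of that follow-up, where the 315 figure is simply the current capacity echoed in the warning standing in for a measured value:

    # Measure sustained IOPS with an external tool (e.g. fio) first, then
    # pin the capacity for osd.3; 315 is a placeholder, not a measurement.
    ceph config set osd.3 osd_mclock_max_capacity_iops_hdd 315
    ceph config get osd.3 osd_mclock_max_capacity_iops_hdd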
2026-03-10T05:53:55.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:54 vm05 bash[43541]: cluster 2026-03-10T05:53:53.701644+0000 mon.a (mon.0) 206 : cluster [WRN] Health check failed: Degraded data redundancy: 96/723 objects degraded (13.278%), 24 pgs degraded (PG_DEGRADED)
2026-03-10T05:53:55.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:54 vm05 bash[43541]: audit 2026-03-10T05:53:54.363809+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:55.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:54 vm05 bash[43541]: audit 2026-03-10T05:53:54.369974+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:55.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:54 vm02 bash[56371]: cluster 2026-03-10T05:53:52.580695+0000 osd.3 (osd.3) 1 : cluster [WRN] OSD bench result of 30790.761249 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.3. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
2026-03-10T05:53:55.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:54 vm02 bash[56371]: cluster 2026-03-10T05:53:53.701644+0000 mon.a (mon.0) 206 : cluster [WRN] Health check failed: Degraded data redundancy: 96/723 objects degraded (13.278%), 24 pgs degraded (PG_DEGRADED)
2026-03-10T05:53:55.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:54 vm02 bash[56371]: audit 2026-03-10T05:53:54.363809+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:55.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:54 vm02 bash[56371]: audit 2026-03-10T05:53:54.369974+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:55.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:54 vm02 bash[55303]: cluster 2026-03-10T05:53:52.580695+0000 osd.3 (osd.3) 1 : cluster [WRN] OSD bench result of 30790.761249 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.3. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
2026-03-10T05:53:55.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:54 vm02 bash[55303]: cluster 2026-03-10T05:53:53.701644+0000 mon.a (mon.0) 206 : cluster [WRN] Health check failed: Degraded data redundancy: 96/723 objects degraded (13.278%), 24 pgs degraded (PG_DEGRADED)
2026-03-10T05:53:55.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:54 vm02 bash[55303]: audit 2026-03-10T05:53:54.363809+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:55.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:54 vm02 bash[55303]: audit 2026-03-10T05:53:54.369974+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:56.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:55 vm05 bash[43541]: cluster 2026-03-10T05:53:54.833880+0000 mgr.y (mgr.24992) 91 : cluster [DBG] pgmap v30: 161 pgs: 42 active+undersized, 24 active+undersized+degraded, 95 active+clean; 457 KiB data, 123 MiB used, 160 GiB / 160 GiB avail; 96/723 objects degraded (13.278%)
2026-03-10T05:53:56.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:55 vm05 bash[43541]: audit 2026-03-10T05:53:54.971990+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:56.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:55 vm05 bash[43541]: audit 2026-03-10T05:53:54.978039+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:56.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:55 vm05 bash[43541]: audit 2026-03-10T05:53:55.883640+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:56.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:55 vm05 bash[43541]: audit 2026-03-10T05:53:55.884498+0000 mon.a (mon.0) 212 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:53:56.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:55 vm02 bash[56371]: cluster 2026-03-10T05:53:54.833880+0000 mgr.y (mgr.24992) 91 : cluster [DBG] pgmap v30: 161 pgs: 42 active+undersized, 24 active+undersized+degraded, 95 active+clean; 457 KiB data, 123 MiB used, 160 GiB / 160 GiB avail; 96/723 objects degraded (13.278%)
2026-03-10T05:53:56.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:55 vm02 bash[56371]: audit 2026-03-10T05:53:54.971990+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:56.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:55 vm02 bash[56371]: audit 2026-03-10T05:53:54.978039+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:56.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:55 vm02 bash[56371]: audit 2026-03-10T05:53:55.883640+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:56.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:55 vm02 bash[56371]: audit 2026-03-10T05:53:55.884498+0000 mon.a (mon.0) 212 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:53:56.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:55 vm02 bash[55303]: cluster 2026-03-10T05:53:54.833880+0000 mgr.y (mgr.24992) 91 : cluster [DBG] pgmap v30: 161 pgs: 42 active+undersized, 24 active+undersized+degraded, 95 active+clean; 457 KiB data, 123 MiB used, 160 GiB / 160 GiB avail; 96/723 objects degraded (13.278%)
2026-03-10T05:53:56.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:55 vm02 bash[55303]: audit 2026-03-10T05:53:54.971990+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:56.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:55 vm02 bash[55303]: audit 2026-03-10T05:53:54.978039+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:56.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:55 vm02 bash[55303]: audit 2026-03-10T05:53:55.883640+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:53:56.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:55 vm02 bash[55303]: audit 2026-03-10T05:53:55.884498+0000 mon.a (mon.0) 212 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:53:57.250 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:53:56 vm05 bash[41269]: ts=2026-03-10T05:53:56.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
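Both Prometheus rule failures in this run appear to stem from the same pattern: during the upgrade two scrape identities (instance="ceph_cluster" and instance="192.168.123.105:9283" for ceph_osd_metadata, and likewise two node_uname_info series for vm05) export overlapping metadata, so the on (...) group_left joins see a non-unique right-hand side. A sketch for confirming the doubled series, assuming the Prometheus HTTP API on prometheus.a (vm05, port 9095 per the orch ps listing above) is reachable:

    # Ask Prometheus which osd metadata series exist more than once per
    # daemon; a non-empty result reproduces the many-to-many complaint.
    curl -s 'http://vm05:9095/api/v1/query' \
        --data-urlencode 'query=count by (ceph_daemon) (ceph_osd_metadata) > 1'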
{__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T05:53:58.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:57 vm05 bash[43541]: cluster 2026-03-10T05:53:56.834406+0000 mgr.y (mgr.24992) 92 : cluster [DBG] pgmap v31: 161 pgs: 14 active+undersized, 9 active+undersized+degraded, 138 active+clean; 457 KiB data, 123 MiB used, 160 GiB / 160 GiB avail; 37/723 objects degraded (5.118%) 2026-03-10T05:53:58.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:57 vm05 bash[43541]: cluster 2026-03-10T05:53:56.834406+0000 mgr.y (mgr.24992) 92 : cluster [DBG] pgmap v31: 161 pgs: 14 active+undersized, 9 active+undersized+degraded, 138 active+clean; 457 KiB data, 123 MiB used, 160 GiB / 160 GiB avail; 37/723 objects degraded (5.118%) 2026-03-10T05:53:58.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:57 vm05 bash[43541]: audit 2026-03-10T05:53:56.899178+0000 mgr.y (mgr.24992) 93 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:53:58.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:57 vm05 bash[43541]: audit 2026-03-10T05:53:56.899178+0000 mgr.y (mgr.24992) 93 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:53:58.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:57 vm02 bash[56371]: cluster 2026-03-10T05:53:56.834406+0000 mgr.y (mgr.24992) 92 : cluster [DBG] pgmap v31: 161 pgs: 14 active+undersized, 9 active+undersized+degraded, 138 active+clean; 457 KiB data, 123 MiB used, 160 GiB / 160 GiB avail; 37/723 objects degraded (5.118%) 2026-03-10T05:53:58.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:57 vm02 bash[56371]: cluster 2026-03-10T05:53:56.834406+0000 mgr.y (mgr.24992) 92 : cluster [DBG] pgmap v31: 161 pgs: 14 active+undersized, 9 active+undersized+degraded, 138 active+clean; 457 KiB data, 123 MiB used, 160 GiB / 160 GiB avail; 37/723 objects degraded (5.118%) 2026-03-10T05:53:58.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:57 vm02 bash[56371]: audit 2026-03-10T05:53:56.899178+0000 mgr.y (mgr.24992) 93 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:53:58.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:57 vm02 bash[56371]: audit 2026-03-10T05:53:56.899178+0000 mgr.y (mgr.24992) 93 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:53:58.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:57 vm02 bash[55303]: cluster 2026-03-10T05:53:56.834406+0000 mgr.y (mgr.24992) 92 : cluster [DBG] pgmap v31: 161 pgs: 14 active+undersized, 9 active+undersized+degraded, 138 active+clean; 457 KiB data, 123 MiB used, 160 GiB / 160 GiB avail; 37/723 objects degraded (5.118%) 2026-03-10T05:53:58.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:57 vm02 bash[55303]: cluster 2026-03-10T05:53:56.834406+0000 mgr.y (mgr.24992) 92 : cluster [DBG] pgmap v31: 161 pgs: 14 active+undersized, 9 active+undersized+degraded, 138 active+clean; 457 KiB data, 123 MiB used, 160 GiB / 160 GiB avail; 
2026-03-10T05:53:58.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:57 vm02 bash[55303]: audit 2026-03-10T05:53:56.899178+0000 mgr.y (mgr.24992) 93 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:53:59.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:59 vm02 bash[56371]: cluster 2026-03-10T05:53:58.981693+0000 mon.a (mon.0) 213 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 37/723 objects degraded (5.118%), 9 pgs degraded)
2026-03-10T05:53:59.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:53:59 vm02 bash[56371]: cluster 2026-03-10T05:53:58.981707+0000 mon.a (mon.0) 214 : cluster [INF] Cluster is now healthy
2026-03-10T05:53:59.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:59 vm02 bash[55303]: cluster 2026-03-10T05:53:58.981693+0000 mon.a (mon.0) 213 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 37/723 objects degraded (5.118%), 9 pgs degraded)
2026-03-10T05:53:59.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:53:59 vm02 bash[55303]: cluster 2026-03-10T05:53:58.981707+0000 mon.a (mon.0) 214 : cluster [INF] Cluster is now healthy
2026-03-10T05:53:59.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:59 vm05 bash[43541]: cluster 2026-03-10T05:53:58.981693+0000 mon.a (mon.0) 213 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 37/723 objects degraded (5.118%), 9 pgs degraded)
2026-03-10T05:53:59.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:53:59 vm05 bash[43541]: cluster 2026-03-10T05:53:58.981707+0000 mon.a (mon.0) 214 : cluster [INF] Cluster is now healthy
2026-03-10T05:54:00.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:00 vm02 bash[56371]: cluster 2026-03-10T05:53:58.834906+0000 mgr.y (mgr.24992) 94 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 124 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:54:00.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:00 vm02 bash[55303]: cluster 2026-03-10T05:53:58.834906+0000 mgr.y (mgr.24992) 94 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 124 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:54:00.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:00 vm05 bash[43541]: cluster 2026-03-10T05:53:58.834906+0000 mgr.y (mgr.24992) 94 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 124 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:54:02.798 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:02 vm02 bash[56371]: cluster 2026-03-10T05:54:00.835197+0000 mgr.y (mgr.24992) 95 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 124 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:54:02.798 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:02 vm02 bash[56371]: audit 2026-03-10T05:54:01.687780+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:02.798 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:02 vm02 bash[56371]: audit 2026-03-10T05:54:01.697878+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:02.798 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:02 vm02 bash[56371]: audit 2026-03-10T05:54:01.699062+0000 mon.a (mon.0) 217 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:54:02.798 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:02 vm02 bash[56371]: audit 2026-03-10T05:54:01.699580+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:54:02.798 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:02 vm02 bash[56371]: audit 2026-03-10T05:54:01.705247+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:02.798 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:02 vm02 bash[56371]: audit 2026-03-10T05:54:01.749139+0000 mon.a (mon.0) 220 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:54:02.798 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:02 vm02 bash[56371]: audit 2026-03-10T05:54:01.750435+0000 mon.a (mon.0) 221 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:54:02.798 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:02 vm02 bash[56371]: audit 2026-03-10T05:54:01.751357+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:54:02.798 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:02 vm02 bash[56371]: audit 2026-03-10T05:54:01.752494+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:54:02.798 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:02 vm02 bash[56371]: audit 2026-03-10T05:54:01.754041+0000 mon.a (mon.0) 224 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch
2026-03-10T05:54:02.798 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:02 vm02 bash[56371]: audit 2026-03-10T05:54:02.197560+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:02.799 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:02 vm02 bash[56371]: audit 2026-03-10T05:54:02.202265+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T05:54:02.799 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:02 vm02 bash[56371]: audit 2026-03-10T05:54:02.202679+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:54:02.799 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:02 vm02 bash[55303]: cluster 2026-03-10T05:54:00.835197+0000 mgr.y (mgr.24992) 95 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 124 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:54:02.799 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:02 vm02 bash[55303]: audit 2026-03-10T05:54:01.687780+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:02.799 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:02 vm02 bash[55303]: audit 2026-03-10T05:54:01.697878+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:02.799 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:02 vm02 bash[55303]: audit 2026-03-10T05:54:01.699062+0000 mon.a (mon.0) 217 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:54:02.799 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:02 vm02 bash[55303]: audit 2026-03-10T05:54:01.699580+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:54:02.799 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:02 vm02 bash[55303]: audit 2026-03-10T05:54:01.705247+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:02.799 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:02 vm02 bash[55303]: audit 2026-03-10T05:54:01.749139+0000 mon.a (mon.0) 220 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:54:02.799 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:02 vm02 bash[55303]: audit 2026-03-10T05:54:01.750435+0000 mon.a (mon.0) 221 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:54:02.799 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:02 vm02 bash[55303]: audit 2026-03-10T05:54:01.751357+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:54:02.799 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:02 vm02 bash[55303]: audit 2026-03-10T05:54:01.752494+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:54:02.799 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:02 vm02 bash[55303]: audit 2026-03-10T05:54:01.754041+0000 mon.a (mon.0) 224 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch
2026-03-10T05:54:02.799 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:02 vm02 bash[55303]: audit 2026-03-10T05:54:02.197560+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:02.799 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:02 vm02 bash[55303]: audit 2026-03-10T05:54:02.202265+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T05:54:02.799 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:02 vm02 bash[55303]: audit 2026-03-10T05:54:02.202679+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:54:03.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:02 vm05 bash[43541]: cluster 2026-03-10T05:54:00.835197+0000 mgr.y (mgr.24992) 95 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 124 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:54:03.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:02 vm05 bash[43541]: audit 2026-03-10T05:54:01.687780+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:03.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:02 vm05 bash[43541]: audit 2026-03-10T05:54:01.697878+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:03.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:02 vm05 bash[43541]: audit 2026-03-10T05:54:01.699062+0000 mon.a (mon.0) 217 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:54:03.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:02 vm05 bash[43541]: audit 2026-03-10T05:54:01.699580+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:54:03.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:02 vm05 bash[43541]: audit 2026-03-10T05:54:01.705247+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:03.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:02 vm05 bash[43541]: audit 2026-03-10T05:54:01.749139+0000 mon.a (mon.0) 220 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:54:03.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:02 vm05 bash[43541]: audit 2026-03-10T05:54:01.750435+0000 mon.a (mon.0) 221 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:54:03.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:02 vm05 bash[43541]: audit 2026-03-10T05:54:01.751357+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:54:03.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:02 vm05 bash[43541]: audit 2026-03-10T05:54:01.752494+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:54:03.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:02 vm05 bash[43541]: audit 2026-03-10T05:54:01.754041+0000 mon.a (mon.0) 224 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch
2026-03-10T05:54:03.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:02 vm05 bash[43541]: audit 2026-03-10T05:54:02.197560+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:03.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:02 vm05 bash[43541]: audit 2026-03-10T05:54:02.202265+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T05:54:03.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:02 vm05 bash[43541]: audit 2026-03-10T05:54:02.202679+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:54:03.084 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:02 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:03.084 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:02 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:03.085 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:02 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:03.085 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:02 vm02 systemd[1]: Stopping Ceph osd.2 for 107483ae-1c44-11f1-b530-c1172cd6122a...
2026-03-10T05:54:03.085 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:03 vm02 bash[31546]: debug 2026-03-10T05:54:03.031+0000 7f49b273a700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T05:54:03.085 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:03 vm02 bash[31546]: debug 2026-03-10T05:54:03.031+0000 7f49b273a700 -1 osd.2 96 *** Got signal Terminated ***
2026-03-10T05:54:03.085 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:03 vm02 bash[31546]: debug 2026-03-10T05:54:03.031+0000 7f49b273a700 -1 osd.2 96 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T05:54:03.085 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:54:02 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:03.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:02 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:03.085 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:54:02 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:54:02] "GET /metrics HTTP/1.1" 200 37749 "" "Prometheus/2.51.0"
2026-03-10T05:54:03.085 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:54:02 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:03.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:02 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
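
Every unit start/stop on vm02 re-emits the systemd KillMode=none warning because the unit template cephadm installs at /etc/systemd/system/ceph-<fsid>@.service sets KillMode=none on purpose: the container runtime, not systemd, owns the daemon's child processes. In this run the message is expected noise rather than a failure. Purely as an illustration of the change systemd's hint asks for (not something cephadm-managed hosts should normally do, since it overrides what cephadm ships), a drop-in would look like:

    # Hypothetical drop-in; fsid taken from the unit name in the log.
    d=/etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service.d
    sudo mkdir -p "$d"
    printf '[Service]\nKillMode=mixed\n' | sudo tee "$d/10-killmode.conf"
    sudo systemctl daemon-reload
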
2026-03-10T05:54:03.085 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:54:02 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:03.085 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:54:02 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:04.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:03 vm05 bash[43541]: audit 2026-03-10T05:54:01.754621+0000 mgr.y (mgr.24992) 96 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch
2026-03-10T05:54:04.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:03 vm05 bash[43541]: cephadm 2026-03-10T05:54:01.755182+0000 mgr.y (mgr.24992) 97 : cephadm [INF] Upgrade: osd.2 is safe to restart
2026-03-10T05:54:04.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:03 vm05 bash[43541]: cephadm 2026-03-10T05:54:02.192723+0000 mgr.y (mgr.24992) 98 : cephadm [INF] Upgrade: Updating osd.2
2026-03-10T05:54:04.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:03 vm05 bash[43541]: cephadm 2026-03-10T05:54:02.204021+0000 mgr.y (mgr.24992) 99 : cephadm [INF] Deploying daemon osd.2 on vm02
2026-03-10T05:54:04.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:03 vm05 bash[43541]: cluster 2026-03-10T05:54:03.034873+0000 mon.a (mon.0) 228 : cluster [INF] osd.2 marked itself down and dead
2026-03-10T05:54:04.064 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:03 vm02 bash[60769]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-osd-2
2026-03-10T05:54:04.064 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:03 vm02 bash[56371]: audit 2026-03-10T05:54:01.754621+0000 mgr.y (mgr.24992) 96 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch
2026-03-10T05:54:04.064 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:03 vm02 bash[56371]: cephadm 2026-03-10T05:54:01.755182+0000 mgr.y (mgr.24992) 97 : cephadm [INF] Upgrade: osd.2 is safe to restart
2026-03-10T05:54:04.064 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:03 vm02 bash[56371]: cephadm 2026-03-10T05:54:02.192723+0000 mgr.y (mgr.24992) 98 : cephadm [INF] Upgrade: Updating osd.2
2026-03-10T05:54:04.064 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:03 vm02 bash[56371]: cephadm 2026-03-10T05:54:02.204021+0000 mgr.y (mgr.24992) 99 : cephadm [INF] Deploying daemon osd.2 on vm02
2026-03-10T05:54:04.064 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:03 vm02 bash[56371]: cluster 2026-03-10T05:54:03.034873+0000 mon.a (mon.0) 228 : cluster [INF] osd.2 marked itself down and dead
2026-03-10T05:54:04.064 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:03 vm02 bash[55303]: audit 2026-03-10T05:54:01.754621+0000 mgr.y (mgr.24992) 96 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch
2026-03-10T05:54:04.064 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:03 vm02 bash[55303]: cephadm 2026-03-10T05:54:01.755182+0000 mgr.y (mgr.24992) 97 : cephadm [INF] Upgrade: osd.2 is safe to restart
2026-03-10T05:54:04.064 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:03 vm02 bash[55303]: cephadm 2026-03-10T05:54:02.192723+0000 mgr.y (mgr.24992) 98 : cephadm [INF] Upgrade: Updating osd.2
2026-03-10T05:54:04.064 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:03 vm02 bash[55303]: cephadm 2026-03-10T05:54:02.204021+0000 mgr.y (mgr.24992) 99 : cephadm [INF] Deploying daemon osd.2 on vm02
2026-03-10T05:54:04.064 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:03 vm02 bash[55303]: cluster 2026-03-10T05:54:03.034873+0000 mon.a (mon.0) 228 : cluster [INF] osd.2 marked itself down and dead
2026-03-10T05:54:04.334 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:04 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:04.334 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:04 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:04.334 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:04 vm02 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.2.service: Deactivated successfully.
2026-03-10T05:54:04.334 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:04 vm02 systemd[1]: Stopped Ceph osd.2 for 107483ae-1c44-11f1-b530-c1172cd6122a.
2026-03-10T05:54:04.334 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:04 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:04.335 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:04 vm02 systemd[1]: Started Ceph osd.2 for 107483ae-1c44-11f1-b530-c1172cd6122a.
2026-03-10T05:54:04.335 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:54:04 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:04.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:04 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:04.335 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:54:04 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:04.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:04 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:04.335 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:54:04 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:04.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:54:04 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:04.500 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:54:04 vm05 bash[41269]: ts=2026-03-10T05:54:04.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:54:04.717 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:04 vm02 bash[60982]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T05:54:05.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:04 vm05 bash[43541]: cluster 2026-03-10T05:54:02.835645+0000 mgr.y (mgr.24992) 100 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 124 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 0 op/s
2026-03-10T05:54:05.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:04 vm05 bash[43541]: cluster 2026-03-10T05:54:03.690026+0000 mon.a (mon.0) 229 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T05:54:05.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:04 vm05 bash[43541]: cluster 2026-03-10T05:54:03.722684+0000 mon.a (mon.0) 230 : cluster [DBG] osdmap e97: 8 total, 7 up, 8 in
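
The CephOSDFlapping evaluation failure above is the same class of PromQL error as the CephNodeDiskspaceWarning one earlier, with a different duplicate: ceph_osd_metadata exists twice per OSD, once with instance="ceph_cluster" and once with instance="192.168.123.105:9283", which looks like the mgr exporter being scraped under two different instance labelings across the v17.2.0-to-squid transition. The analogous series check, under the same endpoint assumptions as before:

    # Two ceph_osd_metadata series per ceph_daemon (differing in 'instance'
    # and 'cluster') break the on (ceph_daemon) group_left join in the rule.
    curl -sG 'http://vm05:9095/api/v1/series' \
        --data-urlencode 'match[]=ceph_osd_metadata{ceph_daemon="osd.0"}'
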
INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:04 vm05 bash[43541]: cluster 2026-03-10T05:54:03.722684+0000 mon.a (mon.0) 230 : cluster [DBG] osdmap e97: 8 total, 7 up, 8 in
2026-03-10T05:54:05.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:04 vm05 bash[43541]: audit 2026-03-10T05:54:04.314653+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:05.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:04 vm05 bash[43541]: audit 2026-03-10T05:54:04.320812+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:05.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:04 vm02 bash[56371]: cluster 2026-03-10T05:54:02.835645+0000 mgr.y (mgr.24992) 100 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 124 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 0 op/s
2026-03-10T05:54:05.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:04 vm02 bash[56371]: cluster 2026-03-10T05:54:03.690026+0000 mon.a (mon.0) 229 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T05:54:05.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:04 vm02 bash[56371]: cluster 2026-03-10T05:54:03.722684+0000 mon.a (mon.0) 230 : cluster [DBG] osdmap e97: 8 total, 7 up, 8 in
2026-03-10T05:54:05.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:04 vm02 bash[56371]: audit 2026-03-10T05:54:04.314653+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:05.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:04 vm02 bash[56371]: audit 2026-03-10T05:54:04.320812+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:05.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:04 vm02 bash[55303]: cluster 2026-03-10T05:54:02.835645+0000 mgr.y (mgr.24992) 100 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 124 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 0 op/s
2026-03-10T05:54:05.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:04 vm02 bash[55303]: cluster 2026-03-10T05:54:03.690026+0000 mon.a (mon.0) 229 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T05:54:05.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:04 vm02 bash[55303]: cluster 2026-03-10T05:54:03.722684+0000 mon.a (mon.0) 230 : cluster [DBG] osdmap e97: 8 total, 7 up, 8 in
2026-03-10T05:54:05.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:04 vm02 bash[55303]: audit 2026-03-10T05:54:04.314653+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:05.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:04 vm02 bash[55303]: audit 2026-03-10T05:54:04.320812+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:05.584 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:05 vm02 bash[60982]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-10T05:54:05.584 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:05 vm02 bash[60982]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T05:54:05.584 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:05 vm02 bash[60982]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T05:54:05.585 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:05 vm02 bash[60982]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
2026-03-10T05:54:05.585 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:05 vm02 bash[60982]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-2d60fd26-1d13-4945-8699-ed58adf37202/osd-block-2d5b11d8-3856-47e7-80bc-ba0d5e91fd6c --path /var/lib/ceph/osd/ceph-2 --no-mon-config
2026-03-10T05:54:06.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:05 vm05 bash[43541]: cluster 2026-03-10T05:54:04.705717+0000 mon.a (mon.0) 233 : cluster [DBG] osdmap e98: 8 total, 7 up, 8 in
2026-03-10T05:54:06.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:05 vm05 bash[43541]: cluster 2026-03-10T05:54:04.835957+0000 mgr.y (mgr.24992) 101 : cluster [DBG] pgmap v37: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 124 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T05:54:06.084 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:05 vm02 bash[60982]: Running command: /usr/bin/ln -snf /dev/ceph-2d60fd26-1d13-4945-8699-ed58adf37202/osd-block-2d5b11d8-3856-47e7-80bc-ba0d5e91fd6c /var/lib/ceph/osd/ceph-2/block
2026-03-10T05:54:06.084 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:05 vm02 bash[60982]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
2026-03-10T05:54:06.084 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:05 vm02 bash[60982]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
2026-03-10T05:54:06.084 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:05 vm02 bash[60982]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
2026-03-10T05:54:06.084 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:05 vm02 bash[60982]: --> ceph-volume lvm activate successful for osd ID: 2
2026-03-10T05:54:06.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:05 vm02 bash[56371]: cluster 2026-03-10T05:54:04.705717+0000 mon.a (mon.0) 233 : cluster [DBG] osdmap e98: 8 total, 7 up, 8 in
2026-03-10T05:54:06.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:05 vm02 bash[56371]: cluster 2026-03-10T05:54:04.835957+0000 mgr.y (mgr.24992) 101 : cluster [DBG] pgmap v37: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 124 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T05:54:06.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:05 vm02 bash[55303]: cluster 2026-03-10T05:54:04.705717+0000 mon.a (mon.0) 233 : cluster [DBG] osdmap e98: 8 total, 7 up, 8 in
2026-03-10T05:54:06.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:05 vm02 bash[55303]: cluster 2026-03-10T05:54:04.835957+0000 mgr.y (mgr.24992) 101 : cluster [DBG] pgmap v37: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 124 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
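The osd.2 journal lines above are `ceph-volume` re-activating the OSD after raw-mode activation found nothing to activate: it primes the OSD directory from the bluestore device, symlinks the LV to /var/lib/ceph/osd/ceph-2/block, and chowns the tree to ceph:ceph. A minimal, hypothetical sanity check of that layout (paths taken from the log lines; this is illustrative, not part of the teuthology task):

```python
# Hypothetical sanity check (not part of the teuthology task): confirm the
# layout `ceph-volume lvm activate` just reported for osd.2. Paths are taken
# from the journal lines above.
import os

osd_dir = "/var/lib/ceph/osd/ceph-2"
block = os.path.join(osd_dir, "block")

if not os.path.isdir(osd_dir):
    print(f"{osd_dir} not present on this host")
elif not os.path.islink(block):
    print(f"{block} is not a symlink; activation layout looks wrong")
else:
    # ceph-volume symlinked the LV into the OSD dir (`ln -snf ... block`) ...
    print("block ->", os.readlink(block))
    # ... and chowned the tree so the OSD process can open it (`chown -R ceph:ceph`).
    st = os.stat(osd_dir)
    print("osd dir uid/gid:", st.st_uid, st.st_gid)
```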
2026-03-10T05:54:06.834 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:06 vm02 bash[61325]: debug 2026-03-10T05:54:06.523+0000 7f8dcec21740 -1 Falling back to public interface
2026-03-10T05:54:07.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:06 vm05 bash[43541]: audit 2026-03-10T05:54:05.876963+0000 mon.a (mon.0) 234 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:07.250 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:54:06 vm05 bash[41269]: ts=2026-03-10T05:54:06.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:54:07.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:06 vm02 bash[56371]: audit 2026-03-10T05:54:05.876963+0000 mon.a (mon.0) 234 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:07.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:06 vm02 bash[55303]: audit 2026-03-10T05:54:05.876963+0000 mon.a (mon.0) 234 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:07.834 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:07 vm02 bash[61325]: debug 2026-03-10T05:54:07.467+0000 7f8dcec21740 -1 osd.2 0 read_superblock omap replica is missing.
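The `CephNodeDiskspaceWarning` failure above is a PromQL join error, not a disk-space problem: `on (instance) group_left (nodename)` requires the right-hand side to contain exactly one `node_uname_info` series per `instance`, and Prometheus found two for vm05 that are identical except that only one carries a `cluster` label, which suggests two overlapping scrape configs for the same node exporter. A small Python sketch of the uniqueness rule Prometheus is enforcing, using the two series quoted in the error (illustrative only, labels abbreviated):

```python
# Sketch of the "matching labels must be unique on one side" rule behind the
# CephNodeDiskspaceWarning failure above. Series are modeled as label dicts;
# the two vm05 entries mirror the duplicate node_uname_info series quoted in
# the journal line.
from collections import defaultdict

rhs_series = [
    {"__name__": "node_uname_info", "instance": "vm05", "nodename": "vm05",
     "cluster": "107483ae-1c44-11f1-b530-c1172cd6122a"},  # scraped with a cluster label
    {"__name__": "node_uname_info", "instance": "vm05", "nodename": "vm05"},  # and without
]

# `on (instance) group_left (nodename)` buckets the right-hand side by
# `instance`; any bucket with more than one series makes the join
# many-to-many, and Prometheus aborts the rule evaluation.
buckets = defaultdict(list)
for series in rhs_series:
    buckets[series["instance"]].append(series)

for instance, series in buckets.items():
    if len(series) > 1:
        print(f"found duplicate series for instance={instance!r}: {len(series)} entries")
```

Removing the duplicate scrape target, or aggregating the right-hand side down to one series per instance, would make the rule evaluable again; which fix is appropriate depends on why the duplicate target exists.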
2026-03-10T05:54:07.834 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:07 vm02 bash[61325]: debug 2026-03-10T05:54:07.495+0000 7f8dcec21740 -1 osd.2 96 log_to_monitors true
2026-03-10T05:54:08.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:07 vm05 bash[43541]: cluster 2026-03-10T05:54:06.836477+0000 mgr.y (mgr.24992) 102 : cluster [DBG] pgmap v38: 161 pgs: 23 active+undersized, 5 stale+active+clean, 11 active+undersized+degraded, 122 active+clean; 457 KiB data, 124 MiB used, 160 GiB / 160 GiB avail; 51/723 objects degraded (7.054%)
2026-03-10T05:54:08.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:07 vm05 bash[43541]: cluster 2026-03-10T05:54:06.875332+0000 mon.a (mon.0) 235 : cluster [WRN] Health check failed: Degraded data redundancy: 51/723 objects degraded (7.054%), 11 pgs degraded (PG_DEGRADED)
2026-03-10T05:54:08.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:07 vm05 bash[43541]: audit 2026-03-10T05:54:06.906389+0000 mgr.y (mgr.24992) 103 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:54:08.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:07 vm05 bash[43541]: audit 2026-03-10T05:54:07.504809+0000 mon.c (mon.1) 6 : audit [INF] from='osd.2 [v2:192.168.123.102:6818/3128106458,v1:192.168.123.102:6819/3128106458]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T05:54:08.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:07 vm05 bash[43541]: audit 2026-03-10T05:54:07.505099+0000 mon.a (mon.0) 236 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T05:54:08.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:07 vm02 bash[56371]: cluster 2026-03-10T05:54:06.836477+0000 mgr.y (mgr.24992) 102 : cluster [DBG] pgmap v38: 161 pgs: 23 active+undersized, 5 stale+active+clean, 11 active+undersized+degraded, 122 active+clean; 457 KiB data, 124 MiB used, 160 GiB / 160 GiB avail; 51/723 objects degraded (7.054%)
2026-03-10T05:54:08.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:07 vm02 bash[56371]: cluster 2026-03-10T05:54:06.875332+0000 mon.a (mon.0) 235 : cluster [WRN] Health check failed: Degraded data redundancy: 51/723 objects degraded (7.054%), 11 pgs degraded (PG_DEGRADED)
2026-03-10T05:54:08.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:07 vm02 bash[56371]: audit 2026-03-10T05:54:06.906389+0000 mgr.y (mgr.24992) 103 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:54:08.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:07 vm02 bash[56371]: audit 2026-03-10T05:54:07.504809+0000 mon.c (mon.1) 6 : audit [INF] from='osd.2 [v2:192.168.123.102:6818/3128106458,v1:192.168.123.102:6819/3128106458]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T05:54:08.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:07 vm02 bash[56371]: audit 2026-03-10T05:54:07.505099+0000 mon.a (mon.0) 236 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T05:54:08.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:07 vm02 bash[55303]: cluster 2026-03-10T05:54:06.836477+0000 mgr.y (mgr.24992) 102 : cluster [DBG] pgmap v38: 161 pgs: 23 active+undersized, 5 stale+active+clean, 11 active+undersized+degraded, 122 active+clean; 457 KiB data, 124 MiB used, 160 GiB / 160 GiB avail; 51/723 objects degraded (7.054%)
2026-03-10T05:54:08.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:07 vm02 bash[55303]: cluster 2026-03-10T05:54:06.875332+0000 mon.a (mon.0) 235 : cluster [WRN] Health check failed: Degraded data redundancy: 51/723 objects degraded (7.054%), 11 pgs degraded (PG_DEGRADED)
2026-03-10T05:54:08.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:07 vm02 bash[55303]: audit 2026-03-10T05:54:06.906389+0000 mgr.y (mgr.24992) 103 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:54:08.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:07 vm02 bash[55303]: audit 2026-03-10T05:54:07.504809+0000 mon.c (mon.1) 6 : audit [INF] from='osd.2 [v2:192.168.123.102:6818/3128106458,v1:192.168.123.102:6819/3128106458]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T05:54:08.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:07 vm02 bash[55303]: audit 2026-03-10T05:54:07.505099+0000 mon.a (mon.0) 236 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T05:54:08.335 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:07 vm02 bash[61325]: debug 2026-03-10T05:54:07.919+0000 7f8dc69cc640 -1 osd.2 96 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-10T05:54:09.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:08 vm05 bash[43541]: audit 2026-03-10T05:54:07.886956+0000 mon.a (mon.0) 237 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
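The "objects degraded" figure in these pgmap lines is a plain ratio of degraded object copies to total copies, rounded to three decimals; the 66/723 and 2/723 entries further down follow the same rule. A one-liner to confirm the arithmetic:

```python
# pgmap's "objects degraded" percentage is degraded object copies / total copies:
for degraded, total in [(51, 723), (66, 723), (2, 723)]:
    print(f"{degraded}/{total} objects degraded ({degraded / total:.3%})")
# -> 7.054%, 9.129%, 0.277%, matching the mgr's pgmap lines
```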
2026-03-10T05:54:09.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:08 vm05 bash[43541]: cluster 2026-03-10T05:54:07.890580+0000 mon.a (mon.0) 238 : cluster [DBG] osdmap e99: 8 total, 7 up, 8 in
2026-03-10T05:54:09.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:08 vm05 bash[43541]: audit 2026-03-10T05:54:07.891801+0000 mon.c (mon.1) 7 : audit [INF] from='osd.2 [v2:192.168.123.102:6818/3128106458,v1:192.168.123.102:6819/3128106458]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
2026-03-10T05:54:09.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:08 vm05 bash[43541]: audit 2026-03-10T05:54:07.893013+0000 mon.a (mon.0) 239 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
2026-03-10T05:54:09.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:08 vm02 bash[56371]: audit 2026-03-10T05:54:07.886956+0000 mon.a (mon.0) 237 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-10T05:54:09.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:08 vm02 bash[56371]: cluster 2026-03-10T05:54:07.890580+0000 mon.a (mon.0) 238 : cluster [DBG] osdmap e99: 8 total, 7 up, 8 in
2026-03-10T05:54:09.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:08 vm02 bash[56371]: audit 2026-03-10T05:54:07.891801+0000 mon.c (mon.1) 7 : audit [INF] from='osd.2 [v2:192.168.123.102:6818/3128106458,v1:192.168.123.102:6819/3128106458]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
2026-03-10T05:54:09.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:08 vm02 bash[56371]: audit 2026-03-10T05:54:07.893013+0000 mon.a (mon.0) 239 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
2026-03-10T05:54:09.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:08 vm02 bash[55303]: audit 2026-03-10T05:54:07.886956+0000 mon.a (mon.0) 237 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-10T05:54:09.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:08 vm02 bash[55303]: cluster 2026-03-10T05:54:07.890580+0000 mon.a (mon.0) 238 : cluster [DBG] osdmap e99: 8 total, 7 up, 8 in
2026-03-10T05:54:09.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:08 vm02 bash[55303]: audit 2026-03-10T05:54:07.891801+0000 mon.c (mon.1) 7 : audit [INF] from='osd.2 [v2:192.168.123.102:6818/3128106458,v1:192.168.123.102:6819/3128106458]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
2026-03-10T05:54:09.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:08 vm02 bash[55303]: audit 2026-03-10T05:54:07.893013+0000 mon.a (mon.0) 239 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
2026-03-10T05:54:10.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:09 vm05 bash[43541]: cluster 2026-03-10T05:54:08.836897+0000 mgr.y (mgr.24992) 104 : cluster [DBG] pgmap v40: 161 pgs: 32 active+undersized, 15 active+undersized+degraded, 114 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 66/723 objects degraded (9.129%)
2026-03-10T05:54:10.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:09 vm05 bash[43541]: cluster 2026-03-10T05:54:08.899340+0000 mon.a (mon.0) 240 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T05:54:10.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:09 vm05 bash[43541]: cluster 2026-03-10T05:54:08.928607+0000 mon.a (mon.0) 241 : cluster [INF] osd.2 [v2:192.168.123.102:6818/3128106458,v1:192.168.123.102:6819/3128106458] boot
2026-03-10T05:54:10.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:09 vm05 bash[43541]: cluster 2026-03-10T05:54:08.928774+0000 mon.a (mon.0) 242 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in
2026-03-10T05:54:10.251 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:09 vm05 bash[43541]: audit 2026-03-10T05:54:08.932481+0000 mon.a (mon.0) 243 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T05:54:10.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:09 vm02 bash[56371]: cluster 2026-03-10T05:54:08.836897+0000 mgr.y (mgr.24992) 104 : cluster [DBG] pgmap v40: 161 pgs: 32 active+undersized, 15 active+undersized+degraded, 114 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 66/723 objects degraded (9.129%)
2026-03-10T05:54:10.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:09 vm02 bash[56371]: cluster 2026-03-10T05:54:08.899340+0000 mon.a (mon.0) 240 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T05:54:10.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:09 vm02 bash[56371]: cluster 2026-03-10T05:54:08.928607+0000 mon.a (mon.0) 241 : cluster [INF] osd.2 [v2:192.168.123.102:6818/3128106458,v1:192.168.123.102:6819/3128106458] boot
2026-03-10T05:54:10.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:09 vm02 bash[56371]: cluster 2026-03-10T05:54:08.928774+0000 mon.a (mon.0) 242 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in
2026-03-10T05:54:10.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:09 vm02 bash[56371]: audit 2026-03-10T05:54:08.932481+0000 mon.a (mon.0) 243 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T05:54:10.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:09 vm02 bash[55303]: cluster 2026-03-10T05:54:08.836897+0000 mgr.y (mgr.24992) 104 : cluster [DBG] pgmap v40: 161 pgs: 32 active+undersized, 15 active+undersized+degraded, 114 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 66/723 objects degraded (9.129%)
2026-03-10T05:54:10.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:09 vm02 bash[55303]: cluster 2026-03-10T05:54:08.899340+0000 mon.a (mon.0) 240 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T05:54:10.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:09 vm02 bash[55303]: cluster 2026-03-10T05:54:08.928607+0000 mon.a (mon.0) 241 : cluster [INF] osd.2 [v2:192.168.123.102:6818/3128106458,v1:192.168.123.102:6819/3128106458] boot
2026-03-10T05:54:10.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:09 vm02 bash[55303]: cluster 2026-03-10T05:54:08.928774+0000 mon.a (mon.0) 242 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in
2026-03-10T05:54:10.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:09 vm02 bash[55303]: audit 2026-03-10T05:54:08.932481+0000 mon.a (mon.0) 243 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T05:54:11.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:10 vm05 bash[43541]: cluster 2026-03-10T05:54:09.916916+0000 mon.a (mon.0) 244 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in
2026-03-10T05:54:11.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:10 vm05 bash[43541]: audit 2026-03-10T05:54:10.879974+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:54:11.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:10 vm05 bash[43541]: audit 2026-03-10T05:54:10.888930+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:11.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:10 vm05 bash[43541]: audit 2026-03-10T05:54:10.896020+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:11.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:10 vm02 bash[55303]: cluster 2026-03-10T05:54:09.916916+0000 mon.a (mon.0) 244 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in
2026-03-10T05:54:11.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:10 vm02 bash[55303]: audit 2026-03-10T05:54:10.879974+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:54:11.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:10 vm02 bash[55303]: audit 2026-03-10T05:54:10.888930+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:11.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:10 vm02 bash[55303]: audit 2026-03-10T05:54:10.896020+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:11.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:10 vm02 bash[56371]: cluster 2026-03-10T05:54:09.916916+0000 mon.a (mon.0) 244 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in
2026-03-10T05:54:11.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:10 vm02 bash[56371]: audit 2026-03-10T05:54:10.879974+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:54:11.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:10 vm02 bash[56371]: audit 2026-03-10T05:54:10.888930+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:11.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:10 vm02 bash[56371]: audit 2026-03-10T05:54:10.896020+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:12.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:12 vm05 bash[43541]: cluster 2026-03-10T05:54:10.837254+0000 mgr.y (mgr.24992) 105 : cluster [DBG] pgmap v43: 161 pgs: 32 active+undersized, 15 active+undersized+degraded, 114 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 66/723 objects degraded (9.129%)
2026-03-10T05:54:12.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:12 vm05 bash[43541]: audit 2026-03-10T05:54:11.457117+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:12.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:12 vm05 bash[43541]: audit 2026-03-10T05:54:11.465363+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:12.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:12 vm02 bash[56371]: cluster 2026-03-10T05:54:10.837254+0000 mgr.y (mgr.24992) 105 : cluster [DBG] pgmap v43: 161 pgs: 32 active+undersized, 15 active+undersized+degraded, 114 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 66/723 objects degraded (9.129%)
2026-03-10T05:54:12.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:12 vm02 bash[56371]: audit 2026-03-10T05:54:11.457117+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:12.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:12 vm02 bash[56371]: audit 2026-03-10T05:54:11.465363+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:12.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:12 vm02 bash[55303]: cluster 2026-03-10T05:54:10.837254+0000 mgr.y (mgr.24992) 105 : cluster [DBG] pgmap v43: 161 pgs: 32 active+undersized, 15 active+undersized+degraded, 114 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 66/723 objects degraded (9.129%)
2026-03-10T05:54:12.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:12 vm02 bash[55303]: audit 2026-03-10T05:54:11.457117+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:12.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:12 vm02 bash[55303]: audit 2026-03-10T05:54:11.465363+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:13.334 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:54:12 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:54:12] "GET /metrics HTTP/1.1" 200 37759 "" "Prometheus/2.51.0"
2026-03-10T05:54:13.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:13 vm02 bash[56371]: cluster 2026-03-10T05:54:13.456143+0000 mon.a (mon.0) 250 : cluster [WRN] Health check update: Degraded data redundancy: 2/723 objects degraded (0.277%), 2 pgs degraded (PG_DEGRADED)
2026-03-10T05:54:13.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:13 vm02 bash[55303]: cluster 2026-03-10T05:54:13.456143+0000 mon.a (mon.0) 250 : cluster [WRN] Health check update: Degraded data redundancy: 2/723 objects degraded (0.277%), 2 pgs degraded (PG_DEGRADED)
2026-03-10T05:54:14.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:13 vm05 bash[43541]: cluster 2026-03-10T05:54:13.456143+0000 mon.a (mon.0) 250 : cluster [WRN] Health check update: Degraded data redundancy: 2/723 objects degraded (0.277%), 2 pgs degraded (PG_DEGRADED)
2026-03-10T05:54:14.500 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:54:14 vm05 bash[41269]: ts=2026-03-10T05:54:14.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:54:14.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:14 vm02 bash[56371]: cluster 2026-03-10T05:54:12.837703+0000 mgr.y (mgr.24992) 106 : cluster [DBG] pgmap v44: 161 pgs: 11 active+undersized, 2 active+undersized+degraded, 148 active+clean; 457 KiB data, 148 MiB used, 160 GiB / 160 GiB avail; 2/723 objects degraded (0.277%)
2026-03-10T05:54:14.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:14 vm02 bash[55303]: cluster 2026-03-10T05:54:12.837703+0000 mgr.y (mgr.24992) 106 : cluster [DBG] pgmap v44: 161 pgs: 11 active+undersized, 2 active+undersized+degraded, 148 active+clean; 457 KiB data, 148 MiB used, 160 GiB / 160 GiB avail; 2/723 objects degraded (0.277%)
2026-03-10T05:54:15.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:14 vm05 bash[43541]: cluster 2026-03-10T05:54:12.837703+0000 mgr.y (mgr.24992) 106 : cluster [DBG] pgmap v44: 161 pgs: 11 active+undersized, 2 active+undersized+degraded, 148 active+clean; 457 KiB data, 148 MiB used, 160 GiB / 160 GiB avail; 2/723 objects degraded (0.277%)
2026-03-10T05:54:15.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:15 vm02 bash[56371]: cluster 2026-03-10T05:54:15.512251+0000 mon.a (mon.0) 251 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 2/723 objects degraded (0.277%), 2 pgs degraded)
2026-03-10T05:54:15.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:15 vm02 bash[56371]: cluster 2026-03-10T05:54:15.512292+0000 mon.a (mon.0) 252 : cluster [INF] Cluster is now healthy
2026-03-10T05:54:15.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:15 vm02 bash[55303]: cluster 2026-03-10T05:54:15.512251+0000 mon.a (mon.0) 251 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 2/723 objects degraded (0.277%), 2 pgs degraded)
2026-03-10T05:54:15.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:15 vm02 bash[55303]: cluster 2026-03-10T05:54:15.512292+0000 mon.a (mon.0) 252 : cluster [INF] Cluster is now healthy
2026-03-10T05:54:16.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:15 vm05 bash[43541]: cluster 2026-03-10T05:54:15.512251+0000 mon.a (mon.0) 251 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 2/723 objects degraded (0.277%), 2 pgs degraded)
2026-03-10T05:54:16.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:15 vm05 bash[43541]: cluster 2026-03-10T05:54:15.512292+0000 mon.a (mon.0) 252 : cluster [INF] Cluster is now healthy
2026-03-10T05:54:16.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:16 vm02 bash[56371]: cluster 2026-03-10T05:54:14.838061+0000 mgr.y (mgr.24992) 107 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 149 MiB used, 160 GiB / 160 GiB avail; 736 B/s rd, 0 op/s
2026-03-10T05:54:16.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:16 vm02 bash[55303]: cluster 2026-03-10T05:54:14.838061+0000 mgr.y (mgr.24992) 107 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 149 MiB used, 160 GiB / 160 GiB avail; 736 B/s rd, 0 op/s
2026-03-10T05:54:16.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:16 vm05 bash[43541]: cluster 2026-03-10T05:54:14.838061+0000 mgr.y (mgr.24992) 107 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 149 MiB used, 160 GiB / 160 GiB avail; 736 B/s rd, 0 op/s
2026-03-10T05:54:17.250 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:54:16 vm05 bash[41269]: ts=2026-03-10T05:54:16.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:54:18.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:18 vm02 bash[56371]: cluster 2026-03-10T05:54:16.838450+0000 mgr.y (mgr.24992) 108 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 149 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:54:18.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:18 vm02 bash[56371]: audit 2026-03-10T05:54:16.915463+0000 mgr.y (mgr.24992) 109 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:54:18.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:18 vm02 bash[56371]: audit 2026-03-10T05:54:18.161200+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:18.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:18 vm02 bash[56371]: audit 2026-03-10T05:54:18.168239+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:18.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:18 vm02 bash[56371]: audit 2026-03-10T05:54:18.169277+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:54:18.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:18 vm02 bash[56371]: audit 2026-03-10T05:54:18.169755+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:54:18.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:18 vm02 bash[56371]: audit 2026-03-10T05:54:18.174501+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:18.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:18 vm02 bash[56371]: audit 2026-03-10T05:54:18.213919+0000 mon.a (mon.0) 258 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:54:18.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:18 vm02 bash[56371]: audit 2026-03-10T05:54:18.215180+0000 mon.a (mon.0) 259 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:54:18.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:18 vm02 bash[56371]: audit 2026-03-10T05:54:18.215971+0000 mon.a (mon.0) 260 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:54:18.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:18 vm02 bash[56371]: audit 2026-03-10T05:54:18.216526+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:54:18.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:18 vm02 bash[56371]: audit 2026-03-10T05:54:18.217204+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch
2026-03-10T05:54:18.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:18 vm02 bash[55303]: cluster 2026-03-10T05:54:16.838450+0000 mgr.y (mgr.24992) 108 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 149 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:54:18.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:18 vm02 bash[55303]: audit 2026-03-10T05:54:16.915463+0000 mgr.y (mgr.24992) 109 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:54:18.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:18 vm02 bash[55303]: audit 2026-03-10T05:54:18.161200+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
10 05:54:18 vm02 bash[55303]: audit 2026-03-10T05:54:18.168239+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:18.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:18 vm02 bash[55303]: audit 2026-03-10T05:54:18.169277+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:18.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:18 vm02 bash[55303]: audit 2026-03-10T05:54:18.169277+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:18.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:18 vm02 bash[55303]: audit 2026-03-10T05:54:18.169755+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:54:18.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:18 vm02 bash[55303]: audit 2026-03-10T05:54:18.169755+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:54:18.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:18 vm02 bash[55303]: audit 2026-03-10T05:54:18.174501+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:18.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:18 vm02 bash[55303]: audit 2026-03-10T05:54:18.174501+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:18.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:18 vm02 bash[55303]: audit 2026-03-10T05:54:18.213919+0000 mon.a (mon.0) 258 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:54:18.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:18 vm02 bash[55303]: audit 2026-03-10T05:54:18.213919+0000 mon.a (mon.0) 258 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:54:18.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:18 vm02 bash[55303]: audit 2026-03-10T05:54:18.215180+0000 mon.a (mon.0) 259 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:54:18.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:18 vm02 bash[55303]: audit 2026-03-10T05:54:18.215180+0000 mon.a (mon.0) 259 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:54:18.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:18 vm02 bash[55303]: audit 2026-03-10T05:54:18.215971+0000 mon.a (mon.0) 260 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:54:18.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:18 vm02 bash[55303]: audit 2026-03-10T05:54:18.215971+0000 mon.a (mon.0) 260 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:54:18.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:18 vm02 bash[55303]: audit 
2026-03-10T05:54:18.216526+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:54:18.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:18 vm02 bash[55303]: audit 2026-03-10T05:54:18.216526+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:54:18.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:18 vm02 bash[55303]: audit 2026-03-10T05:54:18.217204+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-10T05:54:18.836 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:18 vm02 bash[55303]: audit 2026-03-10T05:54:18.217204+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-10T05:54:19.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:18 vm05 bash[43541]: cluster 2026-03-10T05:54:16.838450+0000 mgr.y (mgr.24992) 108 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 149 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:54:19.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:18 vm05 bash[43541]: cluster 2026-03-10T05:54:16.838450+0000 mgr.y (mgr.24992) 108 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 149 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:54:19.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:18 vm05 bash[43541]: audit 2026-03-10T05:54:16.915463+0000 mgr.y (mgr.24992) 109 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:54:19.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:18 vm05 bash[43541]: audit 2026-03-10T05:54:16.915463+0000 mgr.y (mgr.24992) 109 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:54:19.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:18 vm05 bash[43541]: audit 2026-03-10T05:54:18.161200+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:19.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:18 vm05 bash[43541]: audit 2026-03-10T05:54:18.161200+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:19.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:18 vm05 bash[43541]: audit 2026-03-10T05:54:18.168239+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:19.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:18 vm05 bash[43541]: audit 2026-03-10T05:54:18.168239+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:19.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:18 vm05 bash[43541]: audit 2026-03-10T05:54:18.169277+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:19.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:18 vm05 bash[43541]: audit 2026-03-10T05:54:18.169277+0000 mon.a (mon.0) 255 : audit [DBG] 
from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:19.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:18 vm05 bash[43541]: audit 2026-03-10T05:54:18.169755+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:54:19.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:18 vm05 bash[43541]: audit 2026-03-10T05:54:18.169755+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:54:19.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:18 vm05 bash[43541]: audit 2026-03-10T05:54:18.174501+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:19.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:18 vm05 bash[43541]: audit 2026-03-10T05:54:18.174501+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:19.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:18 vm05 bash[43541]: audit 2026-03-10T05:54:18.213919+0000 mon.a (mon.0) 258 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:54:19.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:18 vm05 bash[43541]: audit 2026-03-10T05:54:18.213919+0000 mon.a (mon.0) 258 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:54:19.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:18 vm05 bash[43541]: audit 2026-03-10T05:54:18.215180+0000 mon.a (mon.0) 259 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:54:19.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:18 vm05 bash[43541]: audit 2026-03-10T05:54:18.215180+0000 mon.a (mon.0) 259 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:54:19.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:18 vm05 bash[43541]: audit 2026-03-10T05:54:18.215971+0000 mon.a (mon.0) 260 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:54:19.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:18 vm05 bash[43541]: audit 2026-03-10T05:54:18.215971+0000 mon.a (mon.0) 260 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:54:19.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:18 vm05 bash[43541]: audit 2026-03-10T05:54:18.216526+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:54:19.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:18 vm05 bash[43541]: audit 2026-03-10T05:54:18.216526+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:54:19.001 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:18 vm05 bash[43541]: audit 2026-03-10T05:54:18.217204+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' 
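The audit trail above ends with mgr.y dispatching "osd ok-to-stop" for osd.0: the availability gate the upgrade runs before it will restart an OSD. A minimal sketch of the equivalent manual check (the --max flag asks the mon how many OSDs could stop together; the command exits nonzero if stopping would leave PGs unavailable):

  # Rough equivalent of the safety check dispatched in the audit log above:
  if ceph osd ok-to-stop 0 --max 16; then
      echo "osd.0 can be stopped without making any PG unavailable"
  fi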
2026-03-10T05:54:19.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:19 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:19.585 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:54:19 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:19.585 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:19 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:19.585 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:19 vm02 systemd[1]: Stopping Ceph osd.0 for 107483ae-1c44-11f1-b530-c1172cd6122a...
2026-03-10T05:54:19.585 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:19 vm02 bash[25206]: debug 2026-03-10T05:54:19.471+0000 7f0aa4519700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T05:54:19.585 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:19 vm02 bash[25206]: debug 2026-03-10T05:54:19.471+0000 7f0aa4519700 -1 osd.0 101 *** Got signal Terminated ***
2026-03-10T05:54:19.585 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:19 vm02 bash[25206]: debug 2026-03-10T05:54:19.471+0000 7f0aa4519700 -1 osd.0 101 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T05:54:19.585 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:19 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:19.585 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:19 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:19.585 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:54:19 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:19.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:19 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:19.585 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:54:19 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:19.585 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:54:19 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:20.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:19 vm05 bash[43541]: audit 2026-03-10T05:54:18.217349+0000 mgr.y (mgr.24992) 110 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch
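Every unit instantiated from the ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service template trips the same systemd deprecation warning, because cephadm writes the unit with KillMode=none (the container runtime, not systemd, tears down the daemon's processes). As a generic illustration only of what the warning asks for, and not a recommendation for cephadm-managed units, a drop-in would look roughly like this:

  # Illustration of the systemd warning's suggestion; cephadm sets
  # KillMode=none deliberately, so apply this pattern only to units you own.
  mkdir -p /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service.d
  cat > /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service.d/killmode.conf <<'EOF'
  [Service]
  KillMode=mixed
  EOF
  systemctl daemon-reload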
cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-10T05:54:20.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:19 vm05 bash[43541]: cephadm 2026-03-10T05:54:18.217864+0000 mgr.y (mgr.24992) 111 : cephadm [INF] Upgrade: osd.0 is safe to restart 2026-03-10T05:54:20.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:19 vm05 bash[43541]: cephadm 2026-03-10T05:54:18.217864+0000 mgr.y (mgr.24992) 111 : cephadm [INF] Upgrade: osd.0 is safe to restart 2026-03-10T05:54:20.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:19 vm05 bash[43541]: cephadm 2026-03-10T05:54:18.630908+0000 mgr.y (mgr.24992) 112 : cephadm [INF] Upgrade: Updating osd.0 2026-03-10T05:54:20.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:19 vm05 bash[43541]: cephadm 2026-03-10T05:54:18.630908+0000 mgr.y (mgr.24992) 112 : cephadm [INF] Upgrade: Updating osd.0 2026-03-10T05:54:20.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:19 vm05 bash[43541]: audit 2026-03-10T05:54:18.635418+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:20.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:19 vm05 bash[43541]: audit 2026-03-10T05:54:18.635418+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:20.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:19 vm05 bash[43541]: audit 2026-03-10T05:54:18.637880+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T05:54:20.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:19 vm05 bash[43541]: audit 2026-03-10T05:54:18.637880+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T05:54:20.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:19 vm05 bash[43541]: audit 2026-03-10T05:54:18.638217+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:20.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:19 vm05 bash[43541]: audit 2026-03-10T05:54:18.638217+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:20.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:19 vm05 bash[43541]: cephadm 2026-03-10T05:54:18.639303+0000 mgr.y (mgr.24992) 113 : cephadm [INF] Deploying daemon osd.0 on vm02 2026-03-10T05:54:20.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:19 vm05 bash[43541]: cephadm 2026-03-10T05:54:18.639303+0000 mgr.y (mgr.24992) 113 : cephadm [INF] Deploying daemon osd.0 on vm02 2026-03-10T05:54:20.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:19 vm05 bash[43541]: cluster 2026-03-10T05:54:19.474460+0000 mon.a (mon.0) 266 : cluster [INF] osd.0 marked itself down and dead 2026-03-10T05:54:20.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:19 vm05 bash[43541]: cluster 2026-03-10T05:54:19.474460+0000 mon.a (mon.0) 266 : cluster [INF] osd.0 marked itself down and dead 2026-03-10T05:54:20.011 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:19 vm02 bash[56371]: audit 2026-03-10T05:54:18.217349+0000 mgr.y (mgr.24992) 110 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-10T05:54:20.011 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:19 vm02 bash[56371]: audit 2026-03-10T05:54:18.217349+0000 mgr.y (mgr.24992) 110 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-10T05:54:20.011 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:19 vm02 bash[56371]: cephadm 2026-03-10T05:54:18.217864+0000 mgr.y (mgr.24992) 111 : cephadm [INF] Upgrade: osd.0 is safe to restart 2026-03-10T05:54:20.011 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:19 vm02 bash[56371]: cephadm 2026-03-10T05:54:18.217864+0000 mgr.y (mgr.24992) 111 : cephadm [INF] Upgrade: osd.0 is safe to restart 2026-03-10T05:54:20.011 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:19 vm02 bash[56371]: cephadm 2026-03-10T05:54:18.630908+0000 mgr.y (mgr.24992) 112 : cephadm [INF] Upgrade: Updating osd.0 2026-03-10T05:54:20.011 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:19 vm02 bash[56371]: cephadm 2026-03-10T05:54:18.630908+0000 mgr.y (mgr.24992) 112 : cephadm [INF] Upgrade: Updating osd.0 2026-03-10T05:54:20.012 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:19 vm02 bash[56371]: audit 2026-03-10T05:54:18.635418+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:20.012 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:19 vm02 bash[56371]: audit 2026-03-10T05:54:18.635418+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:20.012 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:19 vm02 bash[56371]: audit 2026-03-10T05:54:18.637880+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T05:54:20.012 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:19 vm02 bash[56371]: audit 2026-03-10T05:54:18.637880+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T05:54:20.012 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:19 vm02 bash[56371]: audit 2026-03-10T05:54:18.638217+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:20.012 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:19 vm02 bash[56371]: audit 2026-03-10T05:54:18.638217+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:20.012 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:19 vm02 bash[56371]: cephadm 2026-03-10T05:54:18.639303+0000 mgr.y (mgr.24992) 113 : cephadm [INF] Deploying daemon osd.0 on vm02 2026-03-10T05:54:20.012 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:19 vm02 bash[56371]: cephadm 2026-03-10T05:54:18.639303+0000 mgr.y (mgr.24992) 113 : cephadm [INF] Deploying daemon osd.0 on vm02 2026-03-10T05:54:20.012 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:19 vm02 bash[56371]: cluster 2026-03-10T05:54:19.474460+0000 mon.a (mon.0) 266 : cluster [INF] osd.0 marked itself down and dead 2026-03-10T05:54:20.012 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:19 vm02 bash[56371]: cluster 2026-03-10T05:54:19.474460+0000 mon.a (mon.0) 266 : cluster [INF] osd.0 
marked itself down and dead 2026-03-10T05:54:20.012 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:19 vm02 bash[55303]: audit 2026-03-10T05:54:18.217349+0000 mgr.y (mgr.24992) 110 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-10T05:54:20.012 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:19 vm02 bash[55303]: audit 2026-03-10T05:54:18.217349+0000 mgr.y (mgr.24992) 110 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-10T05:54:20.012 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:19 vm02 bash[55303]: cephadm 2026-03-10T05:54:18.217864+0000 mgr.y (mgr.24992) 111 : cephadm [INF] Upgrade: osd.0 is safe to restart 2026-03-10T05:54:20.012 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:19 vm02 bash[55303]: cephadm 2026-03-10T05:54:18.217864+0000 mgr.y (mgr.24992) 111 : cephadm [INF] Upgrade: osd.0 is safe to restart 2026-03-10T05:54:20.012 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:19 vm02 bash[55303]: cephadm 2026-03-10T05:54:18.630908+0000 mgr.y (mgr.24992) 112 : cephadm [INF] Upgrade: Updating osd.0 2026-03-10T05:54:20.012 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:19 vm02 bash[55303]: cephadm 2026-03-10T05:54:18.630908+0000 mgr.y (mgr.24992) 112 : cephadm [INF] Upgrade: Updating osd.0 2026-03-10T05:54:20.012 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:19 vm02 bash[55303]: audit 2026-03-10T05:54:18.635418+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:20.012 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:19 vm02 bash[55303]: audit 2026-03-10T05:54:18.635418+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:20.012 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:19 vm02 bash[55303]: audit 2026-03-10T05:54:18.637880+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T05:54:20.012 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:19 vm02 bash[55303]: audit 2026-03-10T05:54:18.637880+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T05:54:20.012 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:19 vm02 bash[55303]: audit 2026-03-10T05:54:18.638217+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:20.012 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:19 vm02 bash[55303]: audit 2026-03-10T05:54:18.638217+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:54:20.012 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:19 vm02 bash[55303]: cephadm 2026-03-10T05:54:18.639303+0000 mgr.y (mgr.24992) 113 : cephadm [INF] Deploying daemon osd.0 on vm02 2026-03-10T05:54:20.012 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:19 vm02 bash[55303]: cephadm 2026-03-10T05:54:18.639303+0000 mgr.y (mgr.24992) 113 : cephadm [INF] Deploying daemon osd.0 on vm02 2026-03-10T05:54:20.012 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:19 vm02 bash[55303]: cluster 2026-03-10T05:54:19.474460+0000 mon.a (mon.0) 266 : 
cluster [INF] osd.0 marked itself down and dead 2026-03-10T05:54:20.012 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:19 vm02 bash[55303]: cluster 2026-03-10T05:54:19.474460+0000 mon.a (mon.0) 266 : cluster [INF] osd.0 marked itself down and dead 2026-03-10T05:54:20.012 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:19 vm02 bash[62809]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-osd-0 2026-03-10T05:54:20.297 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:20 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:54:20.297 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:54:20 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:54:20.297 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:20 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:54:20.297 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:20 vm02 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.0.service: Deactivated successfully. 2026-03-10T05:54:20.297 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:20 vm02 systemd[1]: Stopped Ceph osd.0 for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T05:54:20.297 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:20 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:54:20.298 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:20 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:54:20.298 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:54:20 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
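The sequence above is one iteration of the rolling upgrade: ok-to-stop, "safe to restart", "Updating osd.0", auth get plus a minimal conf for the new container, "Deploying daemon osd.0", then the old daemon is stopped and marked down. A hedged manual equivalent for a single daemon, using the target image of this run (the orchestrator exposes this as 'ceph orch daemon redeploy <name> [image]'):

  # Per-daemon version of the step the upgrade loop performs here:
  ceph orch daemon redeploy osd.0 quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df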
2026-03-10T05:54:20.298 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:20 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:20.298 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:54:20 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:20.298 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:54:20 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:20.584 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:20 vm02 systemd[1]: Started Ceph osd.0 for 107483ae-1c44-11f1-b530-c1172cd6122a.
2026-03-10T05:54:20.584 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:20 vm02 bash[63017]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T05:54:20.584 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:20 vm02 bash[63017]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T05:54:21.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:20 vm05 bash[43541]: cluster 2026-03-10T05:54:18.838844+0000 mgr.y (mgr.24992) 114 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 149 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T05:54:21.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:20 vm05 bash[43541]: cluster 2026-03-10T05:54:19.635031+0000 mon.a (mon.0) 267 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T05:54:21.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:20 vm05 bash[43541]: cluster 2026-03-10T05:54:19.668395+0000 mon.a (mon.0) 268 : cluster [DBG] osdmap e102: 8 total, 7 up, 8 in
2026-03-10T05:54:21.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:20 vm05 bash[43541]: audit 2026-03-10T05:54:20.329426+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:21.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:20 vm05 bash[43541]: audit 2026-03-10T05:54:20.336846+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:21.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:20 vm02 bash[56371]: cluster 2026-03-10T05:54:18.838844+0000 mgr.y (mgr.24992) 114 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 149 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T05:54:21.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:20 vm02 bash[56371]: cluster 2026-03-10T05:54:19.635031+0000 mon.a (mon.0) 267 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T05:54:21.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:20 vm02 bash[56371]: cluster 2026-03-10T05:54:19.668395+0000 mon.a (mon.0) 268 : cluster [DBG] osdmap e102: 8 total, 7 up, 8 in
2026-03-10T05:54:21.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:20 vm02 bash[56371]: audit 2026-03-10T05:54:20.329426+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:21.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:20 vm02 bash[56371]: audit 2026-03-10T05:54:20.336846+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:21.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:20 vm02 bash[55303]: cluster 2026-03-10T05:54:18.838844+0000 mgr.y (mgr.24992) 114 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 149 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T05:54:21.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:20 vm02 bash[55303]: cluster 2026-03-10T05:54:19.635031+0000 mon.a (mon.0) 267 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T05:54:21.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:20 vm02 bash[55303]: cluster 2026-03-10T05:54:19.668395+0000 mon.a (mon.0) 268 : cluster [DBG] osdmap e102: 8 total, 7 up, 8 in
2026-03-10T05:54:21.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:20 vm02 bash[55303]: audit 2026-03-10T05:54:20.329426+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:21.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:20 vm02 bash[55303]: audit 2026-03-10T05:54:20.336846+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:21.154 INFO:teuthology.orchestra.run.vm02.stdout:true
2026-03-10T05:54:21.617 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T05:54:21.617 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.a vm02 *:9093,9094 running (2m) 10s ago 7m 14.9M - 0.25.0 c8568f914cd2 7a7c5c2cddb6
2026-03-10T05:54:21.617 INFO:teuthology.orchestra.run.vm02.stdout:grafana.a vm05 *:3000 running (2m) 50s ago 6m 39.4M - dad864ee21e9 95c6d977988a
2026-03-10T05:54:21.617 INFO:teuthology.orchestra.run.vm02.stdout:iscsi.foo.vm02.mxbwmh vm02 running (105s) 10s ago 6m 44.0M - 3.5 e1d6a67b021e 62aba5b41046
2026-03-10T05:54:21.617 INFO:teuthology.orchestra.run.vm02.stdout:mgr.x vm05 *:8443,9283,8765 running (102s) 50s ago 9m 464M - 19.2.3-678-ge911bdeb 654f31e6858e 7579626ada90
2026-03-10T05:54:21.617 INFO:teuthology.orchestra.run.vm02.stdout:mgr.y vm02 *:8443,9283,8765 running (2m) 10s ago 10m 525M - 19.2.3-678-ge911bdeb 654f31e6858e ef46d0f7b15e
2026-03-10T05:54:21.617 INFO:teuthology.orchestra.run.vm02.stdout:mon.a vm02 running (75s) 10s ago 10m 43.0M 2048M 19.2.3-678-ge911bdeb 654f31e6858e df3a0a290a95
2026-03-10T05:54:21.617 INFO:teuthology.orchestra.run.vm02.stdout:mon.b vm05 running (56s) 50s ago 9m 19.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1da04b90d16b
2026-03-10T05:54:21.617 INFO:teuthology.orchestra.run.vm02.stdout:mon.c vm02 running (89s) 10s ago 9m 39.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 7f2cdf1b7aa6
2026-03-10T05:54:21.617 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.a vm02 *:9100 running (2m) 10s ago 7m 7279k - 1.7.0 72c9c2088986 90288450bd1f
2026-03-10T05:54:21.617 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.b vm05 *:9100 running (2m) 50s ago 7m 7275k - 1.7.0 72c9c2088986 4e859143cb0e
2026-03-10T05:54:21.617 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 starting - - - 4096M
2026-03-10T05:54:21.617 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (9m) 10s ago 9m 55.1M 4096M 17.2.0 e1d6a67b021e 8c25a1e89677
2026-03-10T05:54:21.617 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (15s) 10s ago 9m 30.1M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 51dac2f581d9
2026-03-10T05:54:21.617 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm02 running (32s) 10s ago 8m 67.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 0eca961791f4
2026-03-10T05:54:21.617 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (8m) 50s ago 8m 53.2M 4096M 17.2.0 e1d6a67b021e 4ffe1741f201
2026-03-10T05:54:21.617 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (8m) 50s ago 8m 52.2M 4096M 17.2.0 e1d6a67b021e cba5583c238e
2026-03-10T05:54:21.617 INFO:teuthology.orchestra.run.vm02.stdout:osd.6 vm05 running (8m) 50s ago 8m 49.8M 4096M 17.2.0 e1d6a67b021e 9d1b370357d7
2026-03-10T05:54:21.617 INFO:teuthology.orchestra.run.vm02.stdout:osd.7 vm05 running (7m) 50s ago 7m 51.3M 4096M 17.2.0 e1d6a67b021e 8a4837b788cf
2026-03-10T05:54:21.617 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.a vm05 *:9095 running (104s) 50s ago 7m 37.3M - 2.51.0 1d3b7f56885b 3328811f8f28
2026-03-10T05:54:21.617 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm02.pbogjd vm02 *:8000 running (6m) 10s ago 6m 86.8M - 17.2.0 e1d6a67b021e 2ab2ffd1abaa
2026-03-10T05:54:21.617 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm05.hvmsxl vm05 *:8000 running (6m) 50s ago 6m 85.8M - 17.2.0 e1d6a67b021e 85d1c77b7e9d
2026-03-10T05:54:21.617 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm02.pglcfm vm02 *:80 running (6m) 10s ago 6m 85.8M - 17.2.0 e1d6a67b021e ef152a460673
2026-03-10T05:54:21.617 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm05.hqqmap vm05 *:80 running (6m) 50s ago 6m 86.0M - 17.2.0 e1d6a67b021e 29c9ee794f34
2026-03-10T05:54:21.879 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:54:21.880 INFO:teuthology.orchestra.run.vm02.stdout: "mon": {
2026-03-10T05:54:21.880 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-10T05:54:21.880 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:54:21.880 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": {
2026-03-10T05:54:21.880 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T05:54:21.880 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:54:21.880 INFO:teuthology.orchestra.run.vm02.stdout: "osd": {
2026-03-10T05:54:21.880 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 5,
2026-03-10T05:54:21.880 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T05:54:21.880 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:54:21.880 INFO:teuthology.orchestra.run.vm02.stdout: "rgw": {
2026-03-10T05:54:21.880 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4
2026-03-10T05:54:21.880 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:54:21.880 INFO:teuthology.orchestra.run.vm02.stdout: "overall": {
2026-03-10T05:54:21.880 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 9,
2026-03-10T05:54:21.880 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 7
2026-03-10T05:54:21.880 INFO:teuthology.orchestra.run.vm02.stdout: }
2026-03-10T05:54:21.880 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:54:22.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:21 vm05 bash[43541]: cluster 2026-03-10T05:54:20.665967+0000 mon.a (mon.0) 271 : cluster [DBG] osdmap e103: 8 total, 7 up, 8 in
2026-03-10T05:54:22.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:21 vm05 bash[43541]: audit 2026-03-10T05:54:20.740353+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:22.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:21 vm05 bash[43541]: audit 2026-03-10T05:54:20.751106+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:22.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:21 vm05 bash[43541]: audit 2026-03-10T05:54:20.889169+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:22.084 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:21 vm02 bash[63017]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-10T05:54:22.084 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:21 vm02 bash[63017]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T05:54:22.084 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:21 vm02 bash[63017]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T05:54:22.084 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:21 vm02 bash[63017]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
2026-03-10T05:54:22.084 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:21 vm02 bash[63017]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-93a715bc-9b3e-4ec6-a229-32ec89235d79/osd-block-181bfe3a-c244-4b31-bf3a-c6074cc650d1 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
2026-03-10T05:54:22.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:21 vm02 bash[56371]: cluster 2026-03-10T05:54:20.665967+0000 mon.a (mon.0) 271 : cluster [DBG] osdmap e103: 8 total, 7 up, 8 in
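The 'ceph versions' dump above shows the cluster mid-upgrade: mons and mgrs fully on squid, OSDs split 5 quincy / 2 squid, RGWs untouched. A small sketch for spotting which daemon types are still running mixed versions ('ceph versions' emits JSON by default; jq assumed available):

  # List daemon types that report more than one version:
  ceph versions | jq -r 'to_entries[]
    | select(.key != "overall" and (.value | length) > 1)
    | .key'
  # against the output above this prints: osd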
2026-03-10T05:54:22.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:21 vm02 bash[56371]: audit 2026-03-10T05:54:20.740353+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:22.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:21 vm02 bash[56371]: audit 2026-03-10T05:54:20.751106+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:22.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:21 vm02 bash[56371]: audit 2026-03-10T05:54:20.889169+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:22.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:21 vm02 bash[55303]: cluster 2026-03-10T05:54:20.665967+0000 mon.a (mon.0) 271 : cluster [DBG] osdmap e103: 8 total, 7 up, 8 in
2026-03-10T05:54:22.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:21 vm02 bash[55303]: audit 2026-03-10T05:54:20.740353+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:22.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:21 vm02 bash[55303]: audit 2026-03-10T05:54:20.751106+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:22.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:21 vm02 bash[55303]: audit 2026-03-10T05:54:20.889169+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:22.087 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:54:22.087 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
2026-03-10T05:54:22.087 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true,
2026-03-10T05:54:22.087 INFO:teuthology.orchestra.run.vm02.stdout: "which": "Upgrading all daemon types on all hosts",
2026-03-10T05:54:22.087 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [
2026-03-10T05:54:22.087 INFO:teuthology.orchestra.run.vm02.stdout: "mgr",
2026-03-10T05:54:22.087 INFO:teuthology.orchestra.run.vm02.stdout: "mon"
2026-03-10T05:54:22.087 INFO:teuthology.orchestra.run.vm02.stdout: ],
2026-03-10T05:54:22.087 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "7/23 daemons upgraded",
2026-03-10T05:54:22.087 INFO:teuthology.orchestra.run.vm02.stdout: "message": "Currently upgrading osd daemons",
2026-03-10T05:54:22.087 INFO:teuthology.orchestra.run.vm02.stdout: "is_paused": false
2026-03-10T05:54:22.087 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:54:22.386 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_WARN 1 osds down
2026-03-10T05:54:22.386 INFO:teuthology.orchestra.run.vm02.stdout:[WRN] OSD_DOWN: 1 osds down
2026-03-10T05:54:22.386 INFO:teuthology.orchestra.run.vm02.stdout: osd.0 (root=default,host=vm02) is down
2026-03-10T05:54:22.584 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:22 vm02 bash[63017]: Running command: /usr/bin/ln -snf /dev/ceph-93a715bc-9b3e-4ec6-a229-32ec89235d79/osd-block-181bfe3a-c244-4b31-bf3a-c6074cc650d1 /var/lib/ceph/osd/ceph-0/block
2026-03-10T05:54:22.585 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:22 vm02 bash[63017]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
2026-03-10T05:54:22.585 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:22 vm02 bash[63017]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
2026-03-10T05:54:22.585 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:22 vm02 bash[63017]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
2026-03-10T05:54:22.585 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:22 vm02 bash[63017]: --> ceph-volume lvm activate successful for osd ID: 0
2026-03-10T05:54:23.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:22 vm05 bash[43541]: cluster 2026-03-10T05:54:20.839156+0000 mgr.y (mgr.24992) 115 : cluster [DBG] pgmap v50: 161 pgs: 21 stale+active+clean, 140 active+clean; 457 KiB data, 149 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T05:54:23.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:22 vm05 bash[43541]: audit 2026-03-10T05:54:21.132110+0000 mgr.y (mgr.24992) 116 : audit [DBG] from='client.54137 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:54:23.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:22 vm05 bash[43541]: audit 2026-03-10T05:54:21.411414+0000 mgr.y (mgr.24992) 117 : audit [DBG] from='client.34213 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:54:23.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:22 vm05 bash[43541]: cluster 2026-03-10T05:54:20.839156+0000 mgr.y (mgr.24992) 115 : cluster [DBG] pgmap v50: 161 pgs: 21 stale+active+clean, 140 active+clean; 457 KiB data, 149 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T05:54:23.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:22 vm05 bash[43541]: audit 2026-03-10T05:54:21.132110+0000 mgr.y (mgr.24992) 116 : audit [DBG] from='client.54137 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:54:23.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:22 vm05 bash[43541]: audit 2026-03-10T05:54:21.411414+0000 mgr.y (mgr.24992) 117 : audit [DBG] from='client.34213 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:54:23.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:22 vm05 bash[43541]: audit 2026-03-10T05:54:21.611674+0000 mgr.y (mgr.24992) 118 : audit [DBG] from='client.34219 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:54:23.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:22 vm05 bash[43541]: audit 2026-03-10T05:54:21.878527+0000 mon.c (mon.1) 8 : audit [DBG] from='client.? 192.168.123.102:0/843168252' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:54:23.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:22 vm05 bash[43541]: audit 2026-03-10T05:54:22.385184+0000 mon.a (mon.0) 275 : audit [DBG] from='client.? 192.168.123.102:0/544192170' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T05:54:23.065 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:22 vm02 bash[56371]: cluster 2026-03-10T05:54:20.839156+0000 mgr.y (mgr.24992) 115 : cluster [DBG] pgmap v50: 161 pgs: 21 stale+active+clean, 140 active+clean; 457 KiB data, 149 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T05:54:23.066 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:22 vm02 bash[56371]: audit 2026-03-10T05:54:21.132110+0000 mgr.y (mgr.24992) 116 : audit [DBG] from='client.54137 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:54:23.066 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:22 vm02 bash[56371]: audit 2026-03-10T05:54:21.411414+0000 mgr.y (mgr.24992) 117 : audit [DBG] from='client.34213 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:23.066 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:22 vm02 bash[56371]: audit 2026-03-10T05:54:21.611674+0000 mgr.y (mgr.24992) 118 : audit [DBG] from='client.34219 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:23.066 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:22 vm02 bash[56371]: audit 2026-03-10T05:54:21.611674+0000 mgr.y (mgr.24992) 118 : audit [DBG] from='client.34219 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:23.066 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:22 vm02 bash[56371]: audit 2026-03-10T05:54:21.878527+0000 mon.c (mon.1) 8 : audit [DBG] from='client.? 192.168.123.102:0/843168252' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:54:23.066 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:22 vm02 bash[56371]: audit 2026-03-10T05:54:21.878527+0000 mon.c (mon.1) 8 : audit [DBG] from='client.? 192.168.123.102:0/843168252' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:54:23.066 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:22 vm02 bash[56371]: audit 2026-03-10T05:54:22.385184+0000 mon.a (mon.0) 275 : audit [DBG] from='client.? 192.168.123.102:0/544192170' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:54:23.066 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:22 vm02 bash[56371]: audit 2026-03-10T05:54:22.385184+0000 mon.a (mon.0) 275 : audit [DBG] from='client.? 192.168.123.102:0/544192170' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:54:23.066 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:54:22 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:54:22] "GET /metrics HTTP/1.1" 200 37759 "" "Prometheus/2.51.0" 2026-03-10T05:54:23.066 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:22 vm02 bash[55303]: cluster 2026-03-10T05:54:20.839156+0000 mgr.y (mgr.24992) 115 : cluster [DBG] pgmap v50: 161 pgs: 21 stale+active+clean, 140 active+clean; 457 KiB data, 149 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T05:54:23.066 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:22 vm02 bash[55303]: cluster 2026-03-10T05:54:20.839156+0000 mgr.y (mgr.24992) 115 : cluster [DBG] pgmap v50: 161 pgs: 21 stale+active+clean, 140 active+clean; 457 KiB data, 149 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T05:54:23.066 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:22 vm02 bash[55303]: audit 2026-03-10T05:54:21.132110+0000 mgr.y (mgr.24992) 116 : audit [DBG] from='client.54137 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:23.066 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:22 vm02 bash[55303]: audit 2026-03-10T05:54:21.132110+0000 mgr.y (mgr.24992) 116 : audit [DBG] from='client.54137 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:23.066 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:22 vm02 bash[55303]: audit 2026-03-10T05:54:21.411414+0000 mgr.y (mgr.24992) 117 : audit [DBG] from='client.34213 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:23.066 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:22 vm02 bash[55303]: audit 
2026-03-10T05:54:23.066 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:22 vm02 bash[55303]: audit 2026-03-10T05:54:21.611674+0000 mgr.y (mgr.24992) 118 : audit [DBG] from='client.34219 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:54:23.066 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:22 vm02 bash[55303]: audit 2026-03-10T05:54:21.878527+0000 mon.c (mon.1) 8 : audit [DBG] from='client.? 192.168.123.102:0/843168252' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:54:23.066 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:22 vm02 bash[55303]: audit 2026-03-10T05:54:22.385184+0000 mon.a (mon.0) 275 : audit [DBG] from='client.? 192.168.123.102:0/544192170' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T05:54:23.334 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:23 vm02 bash[63533]: debug 2026-03-10T05:54:23.059+0000 7fb0e14bb740 -1 Falling back to public interface
2026-03-10T05:54:24.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:23 vm05 bash[43541]: audit 2026-03-10T05:54:22.086118+0000 mgr.y (mgr.24992) 119 : audit [DBG] from='client.44226 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:54:24.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:23 vm05 bash[43541]: cluster 2026-03-10T05:54:22.839681+0000 mgr.y (mgr.24992) 120 : cluster [DBG] pgmap v51: 161 pgs: 33 active+undersized, 19 active+undersized+degraded, 109 active+clean; 457 KiB data, 149 MiB used, 160 GiB / 160 GiB avail; 74/723 objects degraded (10.235%)
2026-03-10T05:54:24.027 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:23 vm02 bash[56371]: audit 2026-03-10T05:54:22.086118+0000 mgr.y (mgr.24992) 119 : audit [DBG] from='client.44226 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:24.027 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:23 vm02 bash[56371]: audit 2026-03-10T05:54:22.086118+0000 mgr.y (mgr.24992) 119 : audit [DBG] from='client.44226 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:24.027 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:23 vm02 bash[56371]: cluster 2026-03-10T05:54:22.839681+0000 mgr.y (mgr.24992) 120 : cluster [DBG] pgmap v51: 161 pgs: 33 active+undersized, 19 active+undersized+degraded, 109 active+clean; 457 KiB data, 149 MiB used, 160 GiB / 160 GiB avail; 74/723 objects degraded (10.235%) 2026-03-10T05:54:24.027 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:23 vm02 bash[56371]: cluster 2026-03-10T05:54:22.839681+0000 mgr.y (mgr.24992) 120 : cluster [DBG] pgmap v51: 161 pgs: 33 active+undersized, 19 active+undersized+degraded, 109 active+clean; 457 KiB data, 149 MiB used, 160 GiB / 160 GiB avail; 74/723 objects degraded (10.235%) 2026-03-10T05:54:24.027 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:23 vm02 bash[55303]: audit 2026-03-10T05:54:22.086118+0000 mgr.y (mgr.24992) 119 : audit [DBG] from='client.44226 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:24.027 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:23 vm02 bash[55303]: audit 2026-03-10T05:54:22.086118+0000 mgr.y (mgr.24992) 119 : audit [DBG] from='client.44226 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:24.027 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:23 vm02 bash[55303]: cluster 2026-03-10T05:54:22.839681+0000 mgr.y (mgr.24992) 120 : cluster [DBG] pgmap v51: 161 pgs: 33 active+undersized, 19 active+undersized+degraded, 109 active+clean; 457 KiB data, 149 MiB used, 160 GiB / 160 GiB avail; 74/723 objects degraded (10.235%) 2026-03-10T05:54:24.027 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:23 vm02 bash[55303]: cluster 2026-03-10T05:54:22.839681+0000 mgr.y (mgr.24992) 120 : cluster [DBG] pgmap v51: 161 pgs: 33 active+undersized, 19 active+undersized+degraded, 109 active+clean; 457 KiB data, 149 MiB used, 160 GiB / 160 GiB avail; 74/723 objects degraded (10.235%) 2026-03-10T05:54:24.334 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:24 vm02 bash[63533]: debug 2026-03-10T05:54:24.023+0000 7fb0e14bb740 -1 osd.0 0 read_superblock omap replica is missing. 2026-03-10T05:54:24.334 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:24 vm02 bash[63533]: debug 2026-03-10T05:54:24.039+0000 7fb0e14bb740 -1 osd.0 101 log_to_monitors true 2026-03-10T05:54:24.500 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:54:24 vm05 bash[41269]: ts=2026-03-10T05:54:24.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. 
2026-03-10T05:54:24.500 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:54:24 vm05 bash[41269]: ts=2026-03-10T05:54:24.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}];many-to-many matching not allowed: matching labels must be unique on one side"
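The rule failure above is a PromQL many-to-many error: ceph_osd_metadata for osd.0 exists twice, once per scrape target (instance="ceph_cluster" and instance="192.168.123.105:9283"), so the on (ceph_daemon) group_left join has no unique right-hand side. A hedged sketch of confirming and working around the duplication with promtool (the endpoint URL is an assumption; the durable fix is deduplicating the scrape configs or the shipped rule):

    # list the duplicate metadata series for osd.0
    promtool query instant http://localhost:9090 'ceph_osd_metadata{ceph_daemon="osd.0"}'
    # collapsing the right-hand side to one series per daemon makes the join well-defined
    promtool query instant http://localhost:9090 '(rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) max by (ceph_daemon, hostname) (ceph_osd_metadata)) * 60 > 1'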
2026-03-10T05:54:25.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:24 vm05 bash[43541]: cluster 2026-03-10T05:54:23.706768+0000 mon.a (mon.0) 276 : cluster [WRN] Health check failed: Degraded data redundancy: 74/723 objects degraded (10.235%), 19 pgs degraded (PG_DEGRADED)
2026-03-10T05:54:25.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:24 vm05 bash[43541]: audit 2026-03-10T05:54:24.047002+0000 mon.a (mon.0) 277 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T05:54:25.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:24 vm05 bash[43541]: audit 2026-03-10T05:54:24.049949+0000 mon.b (mon.2) 4 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/2981574516,v1:192.168.123.102:6803/2981574516]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T05:54:25.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:24 vm02 bash[56371]: cluster 2026-03-10T05:54:23.706768+0000 mon.a (mon.0) 276 : cluster [WRN] Health check failed: Degraded data redundancy: 74/723 objects degraded (10.235%), 19 pgs degraded (PG_DEGRADED)
2026-03-10T05:54:25.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:24 vm02 bash[56371]: audit 2026-03-10T05:54:24.047002+0000 mon.a (mon.0) 277 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T05:54:25.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:24 vm02 bash[56371]: audit 2026-03-10T05:54:24.049949+0000 mon.b (mon.2) 4 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/2981574516,v1:192.168.123.102:6803/2981574516]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T05:54:25.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:24 vm02 bash[55303]: cluster 2026-03-10T05:54:23.706768+0000 mon.a (mon.0) 276 : cluster [WRN] Health check failed: Degraded data redundancy: 74/723 objects degraded (10.235%), 19 pgs degraded (PG_DEGRADED)
2026-03-10T05:54:25.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:24 vm02 bash[55303]: audit 2026-03-10T05:54:24.047002+0000 mon.a (mon.0) 277 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T05:54:25.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:24 vm02 bash[55303]: audit 2026-03-10T05:54:24.049949+0000 mon.b (mon.2) 4 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/2981574516,v1:192.168.123.102:6803/2981574516]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T05:54:26.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:25 vm02 bash[56371]: audit 2026-03-10T05:54:24.762702+0000 mon.a (mon.0) 278 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-10T05:54:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:25 vm02 bash[56371]: cluster 2026-03-10T05:54:24.772236+0000 mon.a (mon.0) 279 : cluster [DBG] osdmap e104: 8 total, 7 up, 8 in
2026-03-10T05:54:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:25 vm02 bash[56371]: audit 2026-03-10T05:54:24.778587+0000 mon.a (mon.0) 280 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
2026-03-10T05:54:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:25 vm02 bash[56371]: audit 2026-03-10T05:54:24.781713+0000 mon.b (mon.2) 5 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/2981574516,v1:192.168.123.102:6803/2981574516]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
2026-03-10T05:54:26.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:25 vm02 bash[56371]: cluster 2026-03-10T05:54:24.840027+0000 mgr.y (mgr.24992) 121 : cluster [DBG] pgmap v53: 161 pgs: 37 active+undersized, 21 active+undersized+degraded, 103 active+clean; 457 KiB data, 149 MiB used, 160 GiB / 160 GiB avail; 76/723 objects degraded (10.512%)
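The osd crush set-device-class / osd crush create-or-move pair above is osd.0 re-registering its device class and CRUSH location as it boots into the new container; the osdmap dropping to 7 up and the degraded pgmap reflect the same restart. The result can be checked with the standard CLI, for example:

    ceph osd tree              # CLASS column should show hdd for osd.0 under host vm02
    ceph osd crush class ls    # device classes currently defined in the CRUSH map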
2026-03-10T05:54:26.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:25 vm02 bash[55303]: audit 2026-03-10T05:54:24.762702+0000 mon.a (mon.0) 278 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-10T05:54:26.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:25 vm02 bash[55303]: cluster 2026-03-10T05:54:24.772236+0000 mon.a (mon.0) 279 : cluster [DBG] osdmap e104: 8 total, 7 up, 8 in
2026-03-10T05:54:26.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:25 vm02 bash[55303]: audit 2026-03-10T05:54:24.778587+0000 mon.a (mon.0) 280 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
2026-03-10T05:54:26.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:25 vm02 bash[55303]: audit 2026-03-10T05:54:24.781713+0000 mon.b (mon.2) 5 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/2981574516,v1:192.168.123.102:6803/2981574516]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
2026-03-10T05:54:26.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:25 vm02 bash[55303]: cluster 2026-03-10T05:54:24.840027+0000 mgr.y (mgr.24992) 121 : cluster [DBG] pgmap v53: 161 pgs: 37 active+undersized, 21 active+undersized+degraded, 103 active+clean; 457 KiB data, 149 MiB used, 160 GiB / 160 GiB avail; 76/723 objects degraded (10.512%)
2026-03-10T05:54:26.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:25 vm05 bash[43541]: audit 2026-03-10T05:54:24.762702+0000 mon.a (mon.0) 278 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-10T05:54:26.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:25 vm05 bash[43541]: cluster 2026-03-10T05:54:24.772236+0000 mon.a (mon.0) 279 : cluster [DBG] osdmap e104: 8 total, 7 up, 8 in
2026-03-10T05:54:26.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:25 vm05 bash[43541]: audit 2026-03-10T05:54:24.778587+0000 mon.a (mon.0) 280 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch
"id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:54:26.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:25 vm05 bash[43541]: audit 2026-03-10T05:54:24.778587+0000 mon.a (mon.0) 280 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:54:26.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:25 vm05 bash[43541]: audit 2026-03-10T05:54:24.781713+0000 mon.b (mon.2) 5 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/2981574516,v1:192.168.123.102:6803/2981574516]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:54:26.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:25 vm05 bash[43541]: audit 2026-03-10T05:54:24.781713+0000 mon.b (mon.2) 5 : audit [INF] from='osd.0 [v2:192.168.123.102:6802/2981574516,v1:192.168.123.102:6803/2981574516]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:54:26.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:25 vm05 bash[43541]: cluster 2026-03-10T05:54:24.840027+0000 mgr.y (mgr.24992) 121 : cluster [DBG] pgmap v53: 161 pgs: 37 active+undersized, 21 active+undersized+degraded, 103 active+clean; 457 KiB data, 149 MiB used, 160 GiB / 160 GiB avail; 76/723 objects degraded (10.512%) 2026-03-10T05:54:26.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:25 vm05 bash[43541]: cluster 2026-03-10T05:54:24.840027+0000 mgr.y (mgr.24992) 121 : cluster [DBG] pgmap v53: 161 pgs: 37 active+undersized, 21 active+undersized+degraded, 103 active+clean; 457 KiB data, 149 MiB used, 160 GiB / 160 GiB avail; 76/723 objects degraded (10.512%) 2026-03-10T05:54:27.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:26 vm02 bash[56371]: audit 2026-03-10T05:54:25.902700+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:27.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:26 vm02 bash[56371]: audit 2026-03-10T05:54:25.902700+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:27.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:26 vm02 bash[56371]: audit 2026-03-10T05:54:25.903564+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:54:27.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:26 vm02 bash[56371]: audit 2026-03-10T05:54:25.903564+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:54:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:26 vm02 bash[55303]: audit 2026-03-10T05:54:25.902700+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:26 vm02 bash[55303]: audit 2026-03-10T05:54:25.902700+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:27.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:26 vm02 bash[55303]: audit 2026-03-10T05:54:25.903564+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.24992 
2026-03-10T05:54:27.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:26 vm05 bash[43541]: audit 2026-03-10T05:54:25.902700+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:27.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:26 vm05 bash[43541]: audit 2026-03-10T05:54:25.903564+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:54:27.250 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:54:26 vm05 bash[41269]: ts=2026-03-10T05:54:26.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
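This CephNodeDiskspaceWarning failure has the same duplicate-series cause as the CephOSDFlapping one earlier, here two node_uname_info series for vm05 (one carrying a cluster label, one not). Worth noting: rule validation only checks syntax, so joins like these fail only at evaluation time and surface only in this log. A sketch, assuming promtool is on the path:

    # passes even though evaluation fails at runtime
    promtool check rules /etc/prometheus/alerting/ceph_alerts.yml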
2026-03-10T05:54:28.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:27 vm05 bash[43541]: cluster 2026-03-10T05:54:26.727219+0000 osd.0 (osd.0) 1 : cluster [WRN] OSD bench result of 18520.170015 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
2026-03-10T05:54:28.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:27 vm05 bash[43541]: cluster 2026-03-10T05:54:26.840535+0000 mgr.y (mgr.24992) 122 : cluster [DBG] pgmap v54: 161 pgs: 37 active+undersized, 21 active+undersized+degraded, 103 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 76/723 objects degraded (10.512%)
2026-03-10T05:54:28.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:27 vm05 bash[43541]: cluster 2026-03-10T05:54:26.911009+0000 mon.a (mon.0) 283 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T05:54:28.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:27 vm05 bash[43541]: audit 2026-03-10T05:54:26.925124+0000 mgr.y (mgr.24992) 123 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:54:28.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:27 vm05 bash[43541]: cluster 2026-03-10T05:54:26.936869+0000 mon.a (mon.0) 284 : cluster [INF] osd.0 [v2:192.168.123.102:6802/2981574516,v1:192.168.123.102:6803/2981574516] boot
2026-03-10T05:54:28.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:27 vm05 bash[43541]: cluster 2026-03-10T05:54:26.936886+0000 mon.a (mon.0) 285 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in
2026-03-10T05:54:28.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:27 vm05 bash[43541]: audit 2026-03-10T05:54:26.938319+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
"osd metadata", "id": 0}]: dispatch 2026-03-10T05:54:28.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:27 vm05 bash[43541]: audit 2026-03-10T05:54:26.938319+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T05:54:28.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:27 vm05 bash[43541]: audit 2026-03-10T05:54:27.360637+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:28.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:27 vm05 bash[43541]: audit 2026-03-10T05:54:27.360637+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:28.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:27 vm05 bash[43541]: audit 2026-03-10T05:54:27.368681+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:28.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:27 vm05 bash[43541]: audit 2026-03-10T05:54:27.368681+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:28.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:27 vm02 bash[56371]: cluster 2026-03-10T05:54:26.727219+0000 osd.0 (osd.0) 1 : cluster [WRN] OSD bench result of 18520.170015 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 2026-03-10T05:54:28.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:27 vm02 bash[56371]: cluster 2026-03-10T05:54:26.727219+0000 osd.0 (osd.0) 1 : cluster [WRN] OSD bench result of 18520.170015 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 
2026-03-10T05:54:28.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:27 vm02 bash[56371]: cluster 2026-03-10T05:54:26.840535+0000 mgr.y (mgr.24992) 122 : cluster [DBG] pgmap v54: 161 pgs: 37 active+undersized, 21 active+undersized+degraded, 103 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 76/723 objects degraded (10.512%)
2026-03-10T05:54:28.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:27 vm02 bash[56371]: cluster 2026-03-10T05:54:26.911009+0000 mon.a (mon.0) 283 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T05:54:28.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:27 vm02 bash[56371]: audit 2026-03-10T05:54:26.925124+0000 mgr.y (mgr.24992) 123 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:54:28.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:27 vm02 bash[56371]: cluster 2026-03-10T05:54:26.936869+0000 mon.a (mon.0) 284 : cluster [INF] osd.0 [v2:192.168.123.102:6802/2981574516,v1:192.168.123.102:6803/2981574516] boot
2026-03-10T05:54:28.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:27 vm02 bash[56371]: cluster 2026-03-10T05:54:26.936886+0000 mon.a (mon.0) 285 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in
2026-03-10T05:54:28.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:27 vm02 bash[56371]: audit 2026-03-10T05:54:26.938319+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T05:54:28.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:27 vm02 bash[56371]: audit 2026-03-10T05:54:27.360637+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:28.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:27 vm02 bash[56371]: audit 2026-03-10T05:54:27.368681+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:28.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:27 vm02 bash[55303]: cluster 2026-03-10T05:54:26.727219+0000 osd.0 (osd.0) 1 : cluster [WRN] OSD bench result of 18520.170015 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
2026-03-10T05:54:28.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:27 vm02 bash[55303]: cluster 2026-03-10T05:54:26.840535+0000 mgr.y (mgr.24992) 122 : cluster [DBG] pgmap v54: 161 pgs: 37 active+undersized, 21 active+undersized+degraded, 103 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 76/723 objects degraded (10.512%)
2026-03-10T05:54:28.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:27 vm02 bash[55303]: cluster 2026-03-10T05:54:26.911009+0000 mon.a (mon.0) 283 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T05:54:28.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:27 vm02 bash[55303]: audit 2026-03-10T05:54:26.925124+0000 mgr.y (mgr.24992) 123 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:54:28.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:27 vm02 bash[55303]: cluster 2026-03-10T05:54:26.936869+0000 mon.a (mon.0) 284 : cluster [INF] osd.0 [v2:192.168.123.102:6802/2981574516,v1:192.168.123.102:6803/2981574516] boot
2026-03-10T05:54:28.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:27 vm02 bash[55303]: cluster 2026-03-10T05:54:26.936886+0000 mon.a (mon.0) 285 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in
2026-03-10T05:54:28.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:27 vm02 bash[55303]: audit 2026-03-10T05:54:26.938319+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T05:54:28.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:27 vm02 bash[55303]: audit 2026-03-10T05:54:27.360637+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:28.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:27 vm02 bash[55303]: audit 2026-03-10T05:54:27.368681+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:29.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:28 vm05 bash[43541]: cluster 2026-03-10T05:54:27.944184+0000 mon.a (mon.0) 289 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in
2026-03-10T05:54:29.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:28 vm05 bash[43541]: audit 2026-03-10T05:54:28.002289+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:29.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:28 vm05 bash[43541]: audit 2026-03-10T05:54:28.010903+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:29.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:28 vm02 bash[56371]: cluster 2026-03-10T05:54:27.944184+0000 mon.a (mon.0) 289 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in
2026-03-10T05:54:29.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:28 vm02 bash[56371]: audit 2026-03-10T05:54:28.002289+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:29.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:28 vm02 bash[56371]: audit 2026-03-10T05:54:28.010903+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:29.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:28 vm02 bash[55303]: cluster 2026-03-10T05:54:27.944184+0000 mon.a (mon.0) 289 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in
2026-03-10T05:54:29.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:28 vm02 bash[55303]: audit 2026-03-10T05:54:28.002289+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:29.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:28 vm02 bash[55303]: audit 2026-03-10T05:54:28.010903+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:30.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:29 vm05 bash[43541]: cluster 2026-03-10T05:54:28.840960+0000 mgr.y (mgr.24992) 124 : cluster [DBG] pgmap v57: 161 pgs: 6 peering, 33 active+undersized, 19 active+undersized+degraded, 103 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 74/723 objects degraded (10.235%)
2026-03-10T05:54:30.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:29 vm05 bash[43541]: cluster 2026-03-10T05:54:29.008768+0000 mon.a (mon.0) 292 : cluster [WRN] Health check update: Degraded data redundancy: 74/723 objects degraded (10.235%), 19 pgs degraded (PG_DEGRADED)
2026-03-10T05:54:30.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:29 vm02 bash[56371]: cluster 2026-03-10T05:54:28.840960+0000 mgr.y (mgr.24992) 124 : cluster [DBG] pgmap v57: 161 pgs: 6 peering, 33 active+undersized, 19 active+undersized+degraded, 103 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 74/723 objects degraded (10.235%)
2026-03-10T05:54:30.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:29 vm02 bash[56371]: cluster 2026-03-10T05:54:29.008768+0000 mon.a (mon.0) 292 : cluster [WRN] Health check update: Degraded data redundancy: 74/723 objects degraded (10.235%), 19 pgs degraded (PG_DEGRADED)
2026-03-10T05:54:30.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:29 vm02 bash[55303]: cluster 2026-03-10T05:54:28.840960+0000 mgr.y (mgr.24992) 124 : cluster [DBG] pgmap v57: 161 pgs: 6 peering, 33 active+undersized, 19 active+undersized+degraded, 103 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 74/723 objects degraded (10.235%)
2026-03-10T05:54:30.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:29 vm02 bash[55303]: cluster 2026-03-10T05:54:29.008768+0000 mon.a (mon.0) 292 : cluster [WRN] Health check update: Degraded data redundancy: 74/723 objects degraded (10.235%), 19 pgs degraded (PG_DEGRADED)
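The health-check updates above track recovery as PGs re-peer onto the restarted OSD; the pgmap lines that follow show the degraded fraction falling until PG_DEGRADED clears. One way to watch the same convergence interactively with the standard CLI:

    ceph pg stat         # one-line peering/undersized/degraded summary
    ceph health detail   # PG_DEGRADED clears once every PG is active+clean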
2026-03-10T05:54:30.841362+0000 mgr.y (mgr.24992) 125 : cluster [DBG] pgmap v58: 161 pgs: 6 peering, 21 active+undersized, 10 active+undersized+degraded, 124 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 843 B/s rd, 0 op/s; 39/723 objects degraded (5.394%) 2026-03-10T05:54:32.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:31 vm02 bash[56371]: cluster 2026-03-10T05:54:30.841362+0000 mgr.y (mgr.24992) 125 : cluster [DBG] pgmap v58: 161 pgs: 6 peering, 21 active+undersized, 10 active+undersized+degraded, 124 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 843 B/s rd, 0 op/s; 39/723 objects degraded (5.394%) 2026-03-10T05:54:32.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:31 vm02 bash[56371]: cluster 2026-03-10T05:54:30.841362+0000 mgr.y (mgr.24992) 125 : cluster [DBG] pgmap v58: 161 pgs: 6 peering, 21 active+undersized, 10 active+undersized+degraded, 124 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 843 B/s rd, 0 op/s; 39/723 objects degraded (5.394%) 2026-03-10T05:54:32.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:31 vm02 bash[55303]: cluster 2026-03-10T05:54:30.841362+0000 mgr.y (mgr.24992) 125 : cluster [DBG] pgmap v58: 161 pgs: 6 peering, 21 active+undersized, 10 active+undersized+degraded, 124 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 843 B/s rd, 0 op/s; 39/723 objects degraded (5.394%) 2026-03-10T05:54:32.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:31 vm02 bash[55303]: cluster 2026-03-10T05:54:30.841362+0000 mgr.y (mgr.24992) 125 : cluster [DBG] pgmap v58: 161 pgs: 6 peering, 21 active+undersized, 10 active+undersized+degraded, 124 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 843 B/s rd, 0 op/s; 39/723 objects degraded (5.394%) 2026-03-10T05:54:33.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:33 vm02 bash[56371]: cluster 2026-03-10T05:54:32.964950+0000 mon.a (mon.0) 293 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 39/723 objects degraded (5.394%), 10 pgs degraded) 2026-03-10T05:54:33.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:33 vm02 bash[56371]: cluster 2026-03-10T05:54:32.964950+0000 mon.a (mon.0) 293 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 39/723 objects degraded (5.394%), 10 pgs degraded) 2026-03-10T05:54:33.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:33 vm02 bash[56371]: cluster 2026-03-10T05:54:32.964973+0000 mon.a (mon.0) 294 : cluster [INF] Cluster is now healthy 2026-03-10T05:54:33.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:33 vm02 bash[56371]: cluster 2026-03-10T05:54:32.964973+0000 mon.a (mon.0) 294 : cluster [INF] Cluster is now healthy 2026-03-10T05:54:33.085 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:54:32 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:54:32] "GET /metrics HTTP/1.1" 200 37825 "" "Prometheus/2.51.0" 2026-03-10T05:54:33.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:33 vm02 bash[55303]: cluster 2026-03-10T05:54:32.964950+0000 mon.a (mon.0) 293 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 39/723 objects degraded (5.394%), 10 pgs degraded) 2026-03-10T05:54:33.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:33 vm02 bash[55303]: cluster 2026-03-10T05:54:32.964950+0000 mon.a (mon.0) 293 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 39/723 objects degraded (5.394%), 10 pgs degraded) 
2026-03-10T05:54:33.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:33 vm02 bash[55303]: cluster 2026-03-10T05:54:32.964973+0000 mon.a (mon.0) 294 : cluster [INF] Cluster is now healthy 2026-03-10T05:54:33.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:33 vm02 bash[55303]: cluster 2026-03-10T05:54:32.964973+0000 mon.a (mon.0) 294 : cluster [INF] Cluster is now healthy 2026-03-10T05:54:33.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:33 vm05 bash[43541]: cluster 2026-03-10T05:54:32.964950+0000 mon.a (mon.0) 293 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 39/723 objects degraded (5.394%), 10 pgs degraded) 2026-03-10T05:54:33.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:33 vm05 bash[43541]: cluster 2026-03-10T05:54:32.964950+0000 mon.a (mon.0) 293 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 39/723 objects degraded (5.394%), 10 pgs degraded) 2026-03-10T05:54:33.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:33 vm05 bash[43541]: cluster 2026-03-10T05:54:32.964973+0000 mon.a (mon.0) 294 : cluster [INF] Cluster is now healthy 2026-03-10T05:54:33.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:33 vm05 bash[43541]: cluster 2026-03-10T05:54:32.964973+0000 mon.a (mon.0) 294 : cluster [INF] Cluster is now healthy 2026-03-10T05:54:34.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:34 vm02 bash[56371]: cluster 2026-03-10T05:54:32.842011+0000 mgr.y (mgr.24992) 126 : cluster [DBG] pgmap v59: 161 pgs: 6 peering, 155 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:54:34.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:34 vm02 bash[56371]: cluster 2026-03-10T05:54:32.842011+0000 mgr.y (mgr.24992) 126 : cluster [DBG] pgmap v59: 161 pgs: 6 peering, 155 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:54:34.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:34 vm02 bash[55303]: cluster 2026-03-10T05:54:32.842011+0000 mgr.y (mgr.24992) 126 : cluster [DBG] pgmap v59: 161 pgs: 6 peering, 155 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:54:34.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:34 vm02 bash[55303]: cluster 2026-03-10T05:54:32.842011+0000 mgr.y (mgr.24992) 126 : cluster [DBG] pgmap v59: 161 pgs: 6 peering, 155 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:54:34.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:34 vm05 bash[43541]: cluster 2026-03-10T05:54:32.842011+0000 mgr.y (mgr.24992) 126 : cluster [DBG] pgmap v59: 161 pgs: 6 peering, 155 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:54:34.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:34 vm05 bash[43541]: cluster 2026-03-10T05:54:32.842011+0000 mgr.y (mgr.24992) 126 : cluster [DBG] pgmap v59: 161 pgs: 6 peering, 155 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:54:34.500 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:54:34 vm05 bash[41269]: ts=2026-03-10T05:54:34.147Z caller=alerting.go:391 level=warn component="rule manager" alert="unsupported value type" msg="Expanding alert template failed" err="error executing template __alert_CephOSDDown: template: __alert_CephOSDDown:1:358: executing \"__alert_CephOSDDown\" at : error calling query: found 
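The records above show the post-restart recovery completing on its own: pgmap v57 still reports 74/723 objects degraded, v58 is down to 39/723, and by v59 the remaining PGs have peered and the mon clears PG_DEGRADED. A minimal sketch for watching the same transition by hand, using only standard Ceph CLI commands (run on any host with an admin keyring):

    # lists PG_DEGRADED with the degraded-object count until it clears
    ceph health detail
    # one-line PG summary, e.g. "161 pgs: 161 active+clean" once recovery is done
    ceph pg stat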
2026-03-10T05:54:34.500 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:54:34 vm05 bash[41269]: ts=2026-03-10T05:54:34.147Z caller=alerting.go:391 level=warn component="rule manager" alert="unsupported value type" msg="Expanding alert template failed" err="error executing template __alert_CephOSDDown: template: __alert_CephOSDDown:1:358: executing \"__alert_CephOSDDown\" at : error calling query: found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}];many-to-many matching not allowed: matching labels must be unique on one side" data="unsupported value type"
2026-03-10T05:54:34.500 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:54:34 vm05 bash[41269]: ts=2026-03-10T05:54:34.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:54:34.834 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:54:34 vm02 bash[52264]: debug 2026-03-10T05:54:34.595+0000 7fd64b74b640 -1 mgr.server reply reply (16) Device or resource busy unsafe to stop osd(s) at this time (2 PGs are or would become offline)
2026-03-10T05:54:35.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:35 vm02 bash[56371]: audit 2026-03-10T05:54:34.547917+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
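Both Prometheus warnings above share one root cause: ceph_osd_metadata is momentarily present twice per OSD, once with instance="ceph_cluster" and once with instance="192.168.123.105:9283", so the one-to-one side of the `* on (ceph_daemon) group_left (hostname)` join is no longer unique and PromQL rejects the many-to-many match. A hedged spot-check against the Prometheus HTTP API; the host vm05 and port 9095 are assumptions here (9095 is the usual cephadm default), adjust to the actual deployment:

    # list the conflicting series; they should differ only in the `instance` label
    curl -s 'http://vm05:9095/api/v1/query' \
        --data-urlencode 'query=ceph_osd_metadata{ceph_daemon="osd.0"}'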
2026-03-10T05:54:35.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:35 vm02 bash[56371]: audit 2026-03-10T05:54:34.553221+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:35.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:35 vm02 bash[56371]: audit 2026-03-10T05:54:34.554540+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:54:35.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:35 vm02 bash[56371]: audit 2026-03-10T05:54:34.555001+0000 mon.a (mon.0) 298 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:54:35.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:35 vm02 bash[56371]: audit 2026-03-10T05:54:34.558556+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:35.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:35 vm02 bash[56371]: audit 2026-03-10T05:54:34.597241+0000 mon.a (mon.0) 300 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:54:35.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:35 vm02 bash[56371]: audit 2026-03-10T05:54:34.598551+0000 mon.a (mon.0) 301 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:54:35.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:35 vm02 bash[56371]: audit 2026-03-10T05:54:34.599227+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:54:35.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:35 vm02 bash[56371]: audit 2026-03-10T05:54:34.599749+0000 mon.a (mon.0) 303 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:54:35.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:35 vm02 bash[56371]: audit 2026-03-10T05:54:34.600388+0000 mon.a (mon.0) 304 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch
2026-03-10T05:54:35.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:35 vm02 bash[56371]: audit 2026-03-10T05:54:34.600510+0000 mgr.y (mgr.24992) 127 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch
2026-03-10T05:54:35.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:35 vm02 bash[56371]: cephadm 2026-03-10T05:54:34.601579+0000 mgr.y (mgr.24992) 128 : cephadm [INF] Upgrade: unsafe to stop osd(s) at this time (2 PGs are or would become offline)
2026-03-10T05:54:35.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:35 vm02 bash[55303]: audit 2026-03-10T05:54:34.547917+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:35.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:35 vm02 bash[55303]: audit 2026-03-10T05:54:34.553221+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:35.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:35 vm02 bash[55303]: audit 2026-03-10T05:54:34.554540+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:54:35.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:35 vm02 bash[55303]: audit 2026-03-10T05:54:34.555001+0000 mon.a (mon.0) 298 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:54:35.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:35 vm02 bash[55303]: audit 2026-03-10T05:54:34.558556+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:35.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:35 vm02 bash[55303]: audit 2026-03-10T05:54:34.597241+0000 mon.a (mon.0) 300 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:54:35.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:35 vm02 bash[55303]: audit 2026-03-10T05:54:34.598551+0000 mon.a (mon.0) 301 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:54:35.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:35 vm02 bash[55303]: audit 2026-03-10T05:54:34.599227+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:54:35.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:35 vm02 bash[55303]: audit 2026-03-10T05:54:34.599749+0000 mon.a (mon.0) 303 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:54:35.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:35 vm02 bash[55303]: audit 2026-03-10T05:54:34.600388+0000 mon.a (mon.0) 304 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch
2026-03-10T05:54:35.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:35 vm02 bash[55303]: audit 2026-03-10T05:54:34.600510+0000 mgr.y (mgr.24992) 127 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch
2026-03-10T05:54:35.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:35 vm02 bash[55303]: cephadm 2026-03-10T05:54:34.601579+0000 mgr.y (mgr.24992) 128 : cephadm [INF] Upgrade: unsafe to stop osd(s) at this time (2 PGs are or would become offline)
2026-03-10T05:54:36.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:35 vm05 bash[43541]: audit 2026-03-10T05:54:34.547917+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:36.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:35 vm05 bash[43541]: audit 2026-03-10T05:54:34.553221+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:36.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:35 vm05 bash[43541]: audit 2026-03-10T05:54:34.554540+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:54:36.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:35 vm05 bash[43541]: audit 2026-03-10T05:54:34.555001+0000 mon.a (mon.0) 298 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:54:36.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:35 vm05 bash[43541]: audit 2026-03-10T05:54:34.558556+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:36.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:35 vm05 bash[43541]: audit 2026-03-10T05:54:34.597241+0000 mon.a (mon.0) 300 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:54:36.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:35 vm05 bash[43541]: audit 2026-03-10T05:54:34.598551+0000 mon.a (mon.0) 301 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:54:36.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:35 vm05 bash[43541]: audit 2026-03-10T05:54:34.599227+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:54:36.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:35 vm05 bash[43541]: audit 2026-03-10T05:54:34.599749+0000 mon.a (mon.0) 303 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:54:36.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:35 vm05 bash[43541]: audit 2026-03-10T05:54:34.600388+0000 mon.a (mon.0) 304 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch
2026-03-10T05:54:36.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:35 vm05 bash[43541]: audit 2026-03-10T05:54:34.600510+0000 mgr.y (mgr.24992) 127 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch
2026-03-10T05:54:36.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:35 vm05 bash[43541]: cephadm 2026-03-10T05:54:34.601579+0000 mgr.y (mgr.24992) 128 : cephadm [INF] Upgrade: unsafe to stop osd(s) at this time (2 PGs are or would become offline)
2026-03-10T05:54:36.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:36 vm02 bash[56371]: cluster 2026-03-10T05:54:34.842329+0000 mgr.y (mgr.24992) 129 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T05:54:36.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:36 vm02 bash[56371]: audit 2026-03-10T05:54:35.899100+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:36.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:36 vm02 bash[55303]: cluster 2026-03-10T05:54:34.842329+0000 mgr.y (mgr.24992) 129 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T05:54:36.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:36 vm02 bash[55303]: audit 2026-03-10T05:54:35.899100+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:36.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:36 vm05 bash[43541]: cluster 2026-03-10T05:54:34.842329+0000 mgr.y (mgr.24992) 129 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T05:54:36.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:36 vm05 bash[43541]: audit 2026-03-10T05:54:35.899100+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:37.250 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:54:36 vm05 bash[41269]: ts=2026-03-10T05:54:36.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:54:39.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:38 vm05 bash[43541]: cluster 2026-03-10T05:54:36.842698+0000 mgr.y (mgr.24992) 130 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T05:54:39.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:38 vm05 bash[43541]: audit 2026-03-10T05:54:36.934009+0000 mgr.y (mgr.24992) 131 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:54:39.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:38 vm02 bash[56371]: cluster 2026-03-10T05:54:36.842698+0000 mgr.y (mgr.24992) 130 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T05:54:39.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:38 vm02 bash[56371]: audit 2026-03-10T05:54:36.934009+0000 mgr.y (mgr.24992) 131 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:54:39.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:38 vm02 bash[55303]: cluster 2026-03-10T05:54:36.842698+0000 mgr.y (mgr.24992) 130 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T05:54:39.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:38 vm02 bash[55303]: audit 2026-03-10T05:54:36.934009+0000 mgr.y (mgr.24992) 131 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:54:41.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:40 vm05 bash[43541]: cluster 2026-03-10T05:54:38.843014+0000 mgr.y (mgr.24992) 132 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 939 B/s rd, 0 op/s
2026-03-10T05:54:41.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:40 vm02 bash[56371]: cluster 2026-03-10T05:54:38.843014+0000 mgr.y (mgr.24992) 132 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 939 B/s rd, 0 op/s
2026-03-10T05:54:41.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:40 vm02 bash[55303]: cluster 2026-03-10T05:54:38.843014+0000 mgr.y (mgr.24992) 132 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 939 B/s rd, 0 op/s
2026-03-10T05:54:42.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:41 vm05 bash[43541]: cluster 2026-03-10T05:54:40.843409+0000 mgr.y (mgr.24992) 133 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:54:42.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:41 vm05 bash[43541]: audit 2026-03-10T05:54:40.887306+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:42.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:41 vm05 bash[43541]: audit 2026-03-10T05:54:40.888432+0000 mon.a (mon.0) 307 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:54:42.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:41 vm02 bash[56371]: cluster 2026-03-10T05:54:40.843409+0000 mgr.y (mgr.24992) 133 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:54:42.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:41 vm02 bash[56371]: audit 2026-03-10T05:54:40.887306+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:42.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:41 vm02 bash[56371]: audit 2026-03-10T05:54:40.888432+0000 mon.a (mon.0) 307 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:54:42.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:41 vm02 bash[55303]: cluster 2026-03-10T05:54:40.843409+0000 mgr.y (mgr.24992) 133 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:54:42.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:41 vm02 bash[55303]: audit 2026-03-10T05:54:40.887306+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:42.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:41 vm02 bash[55303]: audit 2026-03-10T05:54:40.888432+0000 mon.a (mon.0) 307 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:54:43.334 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:54:42 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:54:42] "GET /metrics HTTP/1.1" 200 37980 "" "Prometheus/2.51.0"
2026-03-10T05:54:44.145 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:43 vm05 bash[43541]: cluster 2026-03-10T05:54:42.843879+0000 mgr.y (mgr.24992) 134 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:54:44.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:43 vm02 bash[56371]: cluster 2026-03-10T05:54:42.843879+0000 mgr.y (mgr.24992) 134 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:54:44.334 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:43 vm02 bash[55303]: cluster 2026-03-10T05:54:42.843879+0000 mgr.y (mgr.24992) 134 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:54:44.500 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:54:44 vm05 bash[41269]: ts=2026-03-10T05:54:44.148Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.1\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.1\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.1\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:54:46.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:45 vm05 bash[43541]: cluster 2026-03-10T05:54:44.844150+0000 mgr.y (mgr.24992) 135 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:54:46.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:45 vm02 bash[56371]: cluster 2026-03-10T05:54:44.844150+0000 mgr.y (mgr.24992) 135 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:54:46.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:45 vm02 bash[55303]: cluster 2026-03-10T05:54:44.844150+0000 mgr.y (mgr.24992) 135 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:54:47.250 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:54:46 vm05 bash[41269]: ts=2026-03-10T05:54:46.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:54:48.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:47 vm05 bash[43541]: cluster 2026-03-10T05:54:46.844632+0000 mgr.y (mgr.24992) 136 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:54:48.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:47 vm05 bash[43541]: audit 2026-03-10T05:54:46.940031+0000 mgr.y (mgr.24992) 137 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:54:48.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:47 vm02 bash[56371]: cluster 2026-03-10T05:54:46.844632+0000 mgr.y (mgr.24992) 136 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:54:48.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:47 vm02 bash[56371]: audit 2026-03-10T05:54:46.940031+0000 mgr.y (mgr.24992) 137 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:54:48.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:47 vm02 bash[55303]: cluster 2026-03-10T05:54:46.844632+0000 mgr.y (mgr.24992) 136 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:54:48.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:47 vm02 bash[55303]: audit 2026-03-10T05:54:46.940031+0000 mgr.y (mgr.24992) 137 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:54:50.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:49 vm05 bash[43541]: cluster 2026-03-10T05:54:48.844940+0000 mgr.y (mgr.24992) 138 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:54:50.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:49 vm05 bash[43541]: audit 2026-03-10T05:54:49.614562+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch
2026-03-10T05:54:50.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:49 vm05 bash[43541]: audit 2026-03-10T05:54:49.614705+0000 mgr.y (mgr.24992) 139 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch
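Audit 308 above is the same `osd ok-to-stop` gate dispatched again roughly fifteen seconds after the EBUSY reply, and as the following records show it now succeeds: the PGs that would have gone offline have finished peering. cephadm does this retrying itself; a hypothetical equivalent, if a script needed to wait for the same condition (relies on the command's non-zero exit status while the answer is still EBUSY):

    # poll until the mon reports osd.1 safe to stop
    until ceph osd ok-to-stop 1 --max 16 >/dev/null 2>&1; do sleep 15; done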
cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T05:54:50.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:49 vm05 bash[43541]: cephadm 2026-03-10T05:54:49.615463+0000 mgr.y (mgr.24992) 140 : cephadm [INF] Upgrade: osd.1 is safe to restart 2026-03-10T05:54:50.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:49 vm05 bash[43541]: cephadm 2026-03-10T05:54:49.615463+0000 mgr.y (mgr.24992) 140 : cephadm [INF] Upgrade: osd.1 is safe to restart 2026-03-10T05:54:50.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:49 vm02 bash[56371]: cluster 2026-03-10T05:54:48.844940+0000 mgr.y (mgr.24992) 138 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:54:50.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:49 vm02 bash[56371]: cluster 2026-03-10T05:54:48.844940+0000 mgr.y (mgr.24992) 138 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:54:50.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:49 vm02 bash[56371]: audit 2026-03-10T05:54:49.614562+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T05:54:50.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:49 vm02 bash[56371]: audit 2026-03-10T05:54:49.614562+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T05:54:50.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:49 vm02 bash[56371]: audit 2026-03-10T05:54:49.614705+0000 mgr.y (mgr.24992) 139 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T05:54:50.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:49 vm02 bash[56371]: audit 2026-03-10T05:54:49.614705+0000 mgr.y (mgr.24992) 139 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T05:54:50.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:49 vm02 bash[56371]: cephadm 2026-03-10T05:54:49.615463+0000 mgr.y (mgr.24992) 140 : cephadm [INF] Upgrade: osd.1 is safe to restart 2026-03-10T05:54:50.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:49 vm02 bash[56371]: cephadm 2026-03-10T05:54:49.615463+0000 mgr.y (mgr.24992) 140 : cephadm [INF] Upgrade: osd.1 is safe to restart 2026-03-10T05:54:50.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:49 vm02 bash[55303]: cluster 2026-03-10T05:54:48.844940+0000 mgr.y (mgr.24992) 138 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:54:50.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:49 vm02 bash[55303]: cluster 2026-03-10T05:54:48.844940+0000 mgr.y (mgr.24992) 138 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:54:50.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:49 vm02 bash[55303]: audit 2026-03-10T05:54:49.614562+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T05:54:50.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:49 vm02 bash[55303]: audit 2026-03-10T05:54:49.614562+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T05:54:50.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:49 vm02 bash[55303]: audit 2026-03-10T05:54:49.614705+0000 mgr.y (mgr.24992) 139 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T05:54:50.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:49 vm02 bash[55303]: audit 2026-03-10T05:54:49.614705+0000 mgr.y (mgr.24992) 139 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-10T05:54:50.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:49 vm02 bash[55303]: cephadm 2026-03-10T05:54:49.615463+0000 mgr.y (mgr.24992) 140 : cephadm [INF] Upgrade: osd.1 is safe to restart 2026-03-10T05:54:50.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:49 vm02 bash[55303]: cephadm 2026-03-10T05:54:49.615463+0000 mgr.y (mgr.24992) 140 : cephadm [INF] Upgrade: osd.1 is safe to restart 2026-03-10T05:54:51.430 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:51 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:54:51.434 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:51 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T05:54:51.435 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:51 vm02 bash[56371]: cephadm 2026-03-10T05:54:50.270879+0000 mgr.y (mgr.24992) 141 : cephadm [INF] Upgrade: Updating osd.1
2026-03-10T05:54:51.435 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:51 vm02 bash[56371]: audit 2026-03-10T05:54:50.355206+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:51.435 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:51 vm02 bash[56371]: audit 2026-03-10T05:54:50.357138+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T05:54:51.435 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:51 vm02 bash[56371]: audit 2026-03-10T05:54:50.357910+0000 mon.a (mon.0) 311 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:54:51.435 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:51 vm02 bash[56371]: cephadm 2026-03-10T05:54:50.359408+0000 mgr.y (mgr.24992) 142 : cephadm [INF] Deploying daemon osd.1 on vm02
2026-03-10T05:54:51.435 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:51 vm02 bash[56371]: cluster 2026-03-10T05:54:51.296405+0000 mon.a (mon.0) 312 : cluster [INF] osd.1 marked itself down and dead
2026-03-10T05:54:51.435 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:54:51 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:51.435 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:51 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:51.435 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:51 vm02 bash[55303]: cephadm 2026-03-10T05:54:50.270879+0000 mgr.y (mgr.24992) 141 : cephadm [INF] Upgrade: Updating osd.1
2026-03-10T05:54:51.435 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:51 vm02 bash[55303]: audit 2026-03-10T05:54:50.355206+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:51.435 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:51 vm02 bash[55303]: audit 2026-03-10T05:54:50.357138+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T05:54:51.435 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:51 vm02 bash[55303]: audit 2026-03-10T05:54:50.357910+0000 mon.a (mon.0) 311 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:54:51.435 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:51 vm02 bash[55303]: cephadm 2026-03-10T05:54:50.359408+0000 mgr.y (mgr.24992) 142 : cephadm [INF] Deploying daemon osd.1 on vm02
2026-03-10T05:54:51.435 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:51 vm02 bash[55303]: cluster 2026-03-10T05:54:51.296405+0000 mon.a (mon.0) 312 : cluster [INF] osd.1 marked itself down and dead
2026-03-10T05:54:51.435 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:51 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:51.435 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:51 vm02 systemd[1]: Stopping Ceph osd.1 for 107483ae-1c44-11f1-b530-c1172cd6122a... 2026-03-10T05:54:51.435 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:51 vm02 bash[28375]: debug 2026-03-10T05:54:51.291+0000 7f08b447c700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T05:54:51.435 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:51 vm02 bash[28375]: debug 2026-03-10T05:54:51.291+0000 7f08b447c700 -1 osd.1 106 *** Got signal Terminated *** 2026-03-10T05:54:51.435 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:51 vm02 bash[28375]: debug 2026-03-10T05:54:51.291+0000 7f08b447c700 -1 osd.1 106 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T05:54:51.435 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:51 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:54:51.435 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:54:51 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:54:51.435 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:54:51 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:54:51.436 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:54:51 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
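The repeated systemd complaint comes from line 23 of the cephadm-generated unit template, which still ships KillMode=none; systemd warns because that mode opts the container's processes out of its lifecycle management, and the warning itself names the safer alternatives ('mixed' or 'control-group'). For reference only, this is how such a setting would normally be overridden with a standard systemd drop-in, using the fsid taken from the unit name above; cephadm owns and regenerates these unit files, so treat this as an illustration, not a supported fix:

  # hypothetical drop-in overriding the templated unit's KillMode
  mkdir -p /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service.d
  printf '[Service]\nKillMode=mixed\n' \
    > /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service.d/killmode.conf
  systemctl daemon-reload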
2026-03-10T05:54:51.736 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:51 vm02 bash[65019]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-osd-1
2026-03-10T05:54:51.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:51 vm05 bash[43541]: cephadm 2026-03-10T05:54:50.270879+0000 mgr.y (mgr.24992) 141 : cephadm [INF] Upgrade: Updating osd.1
2026-03-10T05:54:51.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:51 vm05 bash[43541]: audit 2026-03-10T05:54:50.355206+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:51.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:51 vm05 bash[43541]: audit 2026-03-10T05:54:50.357138+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T05:54:51.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:51 vm05 bash[43541]: audit 2026-03-10T05:54:50.357910+0000 mon.a (mon.0) 311 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:54:51.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:51 vm05 bash[43541]: cephadm 2026-03-10T05:54:50.359408+0000 mgr.y (mgr.24992) 142 : cephadm [INF] Deploying daemon osd.1 on vm02
2026-03-10T05:54:51.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:51 vm05 bash[43541]: cluster 2026-03-10T05:54:51.296405+0000 mon.a (mon.0) 312 : cluster [INF] osd.1 marked itself down and dead
2026-03-10T05:54:52.038 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:51 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
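What the journal shows next is cephadm redeploying osd.1 in place: systemd stops the old 17.2.0 container, the unit is rewritten for the new image, and the unit is started again. The same per-daemon redeploy/restart can be driven manually; a rough sketch (daemon names as listed by `ceph orch ps`; `cephadm unit` can infer the fsid when only one cluster is present on the host):

  # via the orchestrator, from any admin node
  ceph orch daemon restart osd.1
  # or on the host itself, through cephadm's wrapper around systemd
  cephadm unit --name osd.1 restart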
2026-03-10T05:54:52.038 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:54:51 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:54:52.038 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:51 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:54:52.038 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:54:51 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:54:52.038 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:51 vm02 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.1.service: Deactivated successfully. 2026-03-10T05:54:52.038 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:51 vm02 systemd[1]: Stopped Ceph osd.1 for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T05:54:52.038 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:51 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:54:52.039 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:51 vm02 systemd[1]: Started Ceph osd.1 for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T05:54:52.039 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:54:51 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:54:52.039 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:54:51 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:54:52.039 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:54:51 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:52.039 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:54:51 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:54:52.334 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:52 vm02 bash[65234]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T05:54:52.719 INFO:teuthology.orchestra.run.vm02.stdout:true
2026-03-10T05:54:52.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:52 vm05 bash[43541]: cluster 2026-03-10T05:54:50.845311+0000 mgr.y (mgr.24992) 143 : cluster [DBG] pgmap v68: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:54:52.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:52 vm05 bash[43541]: cluster 2026-03-10T05:54:51.354758+0000 mon.a (mon.0) 313 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T05:54:52.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:52 vm05 bash[43541]: cluster 2026-03-10T05:54:51.367283+0000 mon.a (mon.0) 314 : cluster [DBG] osdmap e107: 8 total, 7 up, 8 in
2026-03-10T05:54:52.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:52 vm05 bash[43541]: audit 2026-03-10T05:54:51.994484+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:52.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:52 vm05 bash[43541]: audit 2026-03-10T05:54:52.002492+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:52.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:52 vm05 bash[43541]: audit 2026-03-10T05:54:52.300415+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:52.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:52 vm05 bash[43541]: audit 2026-03-10T05:54:52.306464+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:52.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:52 vm02 bash[56371]: cluster 2026-03-10T05:54:50.845311+0000 mgr.y (mgr.24992) 143 : cluster [DBG] pgmap v68: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:54:52.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:52 vm02 bash[56371]: cluster 2026-03-10T05:54:51.354758+0000 mon.a (mon.0) 313 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T05:54:52.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:52 vm02 bash[56371]: cluster 2026-03-10T05:54:51.367283+0000 mon.a (mon.0) 314 : cluster [DBG] osdmap e107: 8 total, 7 up, 8 in
2026-03-10T05:54:52.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:52 vm02 bash[56371]: audit 2026-03-10T05:54:51.994484+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:52.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:52 vm02 bash[56371]: audit 2026-03-10T05:54:52.002492+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:52.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:52 vm02 bash[56371]: audit 2026-03-10T05:54:52.300415+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:52.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:52 vm02 bash[56371]: audit 2026-03-10T05:54:52.306464+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:52.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:52 vm02 bash[55303]: cluster 2026-03-10T05:54:50.845311+0000 mgr.y (mgr.24992) 143 : cluster [DBG] pgmap v68: 161 pgs: 161 active+clean; 457 KiB data, 168 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:54:52.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:52 vm02 bash[55303]: cluster 2026-03-10T05:54:51.354758+0000 mon.a (mon.0) 313 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T05:54:52.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:52 vm02 bash[55303]: cluster 2026-03-10T05:54:51.367283+0000 mon.a (mon.0) 314 : cluster [DBG] osdmap e107: 8 total, 7 up, 8 in
2026-03-10T05:54:52.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:52 vm02 bash[55303]: audit 2026-03-10T05:54:51.994484+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:52.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:52 vm02 bash[55303]: audit 2026-03-10T05:54:52.002492+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:52.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:52 vm02 bash[55303]: audit 2026-03-10T05:54:52.300415+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:52.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:52 vm02 bash[55303]: audit 2026-03-10T05:54:52.306464+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:54:53.169 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T05:54:53.169 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.a vm02 *:9093,9094 running (2m) 25s ago 7m 14.9M - 0.25.0 c8568f914cd2 7a7c5c2cddb6
2026-03-10T05:54:53.169 INFO:teuthology.orchestra.run.vm02.stdout:grafana.a vm05 *:3000 running (2m) 82s ago 7m 39.4M - dad864ee21e9 95c6d977988a
2026-03-10T05:54:53.169 INFO:teuthology.orchestra.run.vm02.stdout:iscsi.foo.vm02.mxbwmh vm02 running (2m) 25s ago 7m 44.0M - 3.5 e1d6a67b021e 62aba5b41046
2026-03-10T05:54:53.169 INFO:teuthology.orchestra.run.vm02.stdout:mgr.x vm05 *:8443,9283,8765 running (2m) 82s ago 10m 464M - 19.2.3-678-ge911bdeb 654f31e6858e 7579626ada90
2026-03-10T05:54:53.169 INFO:teuthology.orchestra.run.vm02.stdout:mgr.y vm02 *:8443,9283,8765 running (2m) 25s ago 11m 527M - 19.2.3-678-ge911bdeb 654f31e6858e ef46d0f7b15e
2026-03-10T05:54:53.169 INFO:teuthology.orchestra.run.vm02.stdout:mon.a vm02 running (107s) 25s ago 11m 45.6M 2048M 19.2.3-678-ge911bdeb 654f31e6858e df3a0a290a95
2026-03-10T05:54:53.169 INFO:teuthology.orchestra.run.vm02.stdout:mon.b vm05 running (87s) 82s ago 10m 19.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1da04b90d16b
2026-03-10T05:54:53.169 INFO:teuthology.orchestra.run.vm02.stdout:mon.c vm02 running (2m) 25s ago 10m 41.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 7f2cdf1b7aa6
2026-03-10T05:54:53.169 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.a vm02 *:9100 running (2m) 25s ago 7m 7499k - 1.7.0 72c9c2088986 90288450bd1f
2026-03-10T05:54:53.169 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.b vm05 *:9100 running (2m) 82s ago 7m 7275k - 1.7.0 72c9c2088986 4e859143cb0e
2026-03-10T05:54:53.169 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (30s) 25s ago 10m 31.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 640360275f83
2026-03-10T05:54:53.169 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 starting - - - 4096M
2026-03-10T05:54:53.169 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (47s) 25s ago 9m 43.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 51dac2f581d9
2026-03-10T05:54:53.169 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm02 running (64s) 25s ago 9m 68.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 0eca961791f4
2026-03-10T05:54:53.169 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (9m) 82s ago 9m 53.2M 4096M 17.2.0 e1d6a67b021e 4ffe1741f201
2026-03-10T05:54:53.169 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (8m) 82s ago 8m 52.2M 4096M 17.2.0 e1d6a67b021e cba5583c238e
2026-03-10T05:54:53.169 INFO:teuthology.orchestra.run.vm02.stdout:osd.6 vm05 running (8m) 82s ago 8m 49.8M 4096M 17.2.0 e1d6a67b021e 9d1b370357d7
2026-03-10T05:54:53.169 INFO:teuthology.orchestra.run.vm02.stdout:osd.7 vm05 running (8m) 82s ago 8m 51.3M 4096M 17.2.0 e1d6a67b021e 8a4837b788cf
2026-03-10T05:54:53.169 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.a vm05 *:9095 running (2m) 82s ago 7m 37.3M - 2.51.0 1d3b7f56885b 3328811f8f28
2026-03-10T05:54:53.169 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm02.pbogjd vm02 *:8000 running (7m) 25s ago 7m 87.1M - 17.2.0 e1d6a67b021e 2ab2ffd1abaa
2026-03-10T05:54:53.169 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm05.hvmsxl vm05 *:8000 running (7m) 82s ago 7m 85.8M - 17.2.0 e1d6a67b021e 85d1c77b7e9d
2026-03-10T05:54:53.169 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm02.pglcfm vm02 *:80 running (7m) 25s ago 7m 85.9M - 17.2.0 e1d6a67b021e ef152a460673
2026-03-10T05:54:53.169 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm05.hqqmap vm05 *:80 running (7m) 82s ago 7m 86.0M - 17.2.0 e1d6a67b021e 29c9ee794f34
2026-03-10T05:54:53.261 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:54:52 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:54:52] "GET /metrics HTTP/1.1" 200 37980 "" "Prometheus/2.51.0"
2026-03-10T05:54:53.441 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:54:53.441 INFO:teuthology.orchestra.run.vm02.stdout: "mon": {
2026-03-10T05:54:53.441 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-10T05:54:53.441 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:54:53.441 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": {
2026-03-10T05:54:53.441 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T05:54:53.441 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:54:53.441 INFO:teuthology.orchestra.run.vm02.stdout: "osd": {
2026-03-10T05:54:53.441 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4,
2026-03-10T05:54:53.441 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-10T05:54:53.441 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:54:53.441 INFO:teuthology.orchestra.run.vm02.stdout: "rgw": {
2026-03-10T05:54:53.441 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4
2026-03-10T05:54:53.441 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:54:53.441 INFO:teuthology.orchestra.run.vm02.stdout: "overall": {
2026-03-10T05:54:53.441 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8,
2026-03-10T05:54:53.441 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 8
2026-03-10T05:54:53.441 INFO:teuthology.orchestra.run.vm02.stdout: }
2026-03-10T05:54:53.441 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:54:53.584 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:53 vm02 bash[65234]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-10T05:54:53.585 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:53 vm02 bash[65234]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T05:54:53.585 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:53 vm02 bash[65234]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
2026-03-10T05:54:53.585 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:53 vm02 bash[65234]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-21805841-bb4d-423b-9e61-78c38e72741e/osd-block-c0820da9-42eb-422f-88aa-598d51d4e5e7 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
2026-03-10T05:54:53.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:53 vm02 bash[55303]: cluster 2026-03-10T05:54:52.385505+0000 mon.a (mon.0) 319 : cluster [DBG] osdmap e108: 8 total, 7 up, 8 in
2026-03-10T05:54:53.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:53 vm02 bash[55303]: audit 2026-03-10T05:54:52.697948+0000 mgr.y (mgr.24992) 144 : audit [DBG] from='client.34240 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:54:53.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:53 vm02 bash[56371]: cluster 2026-03-10T05:54:52.385505+0000 mon.a (mon.0) 319 : cluster [DBG] osdmap e108: 8 total, 7 up, 8 in
2026-03-10T05:54:53.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:53 vm02 bash[56371]: audit 2026-03-10T05:54:52.697948+0000 mgr.y (mgr.24992) 144 : audit [DBG] from='client.34240 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:54:53.637 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:54:53.637 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
2026-03-10T05:54:53.637 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true,
2026-03-10T05:54:53.637 INFO:teuthology.orchestra.run.vm02.stdout: "which": "Upgrading all daemon types on all hosts",
2026-03-10T05:54:53.638 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [
2026-03-10T05:54:53.638 INFO:teuthology.orchestra.run.vm02.stdout: "mgr",
2026-03-10T05:54:53.638 INFO:teuthology.orchestra.run.vm02.stdout: "mon"
2026-03-10T05:54:53.638 INFO:teuthology.orchestra.run.vm02.stdout: ],
2026-03-10T05:54:53.638 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "8/23 daemons upgraded",
2026-03-10T05:54:53.638 INFO:teuthology.orchestra.run.vm02.stdout: "message": "Currently upgrading osd daemons",
2026-03-10T05:54:53.638 INFO:teuthology.orchestra.run.vm02.stdout: "is_paused": false
2026-03-10T05:54:53.638 INFO:teuthology.orchestra.run.vm02.stdout:}
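The status JSON above shows the upgrade mid-flight: mon and mgr are finished, 8 of 23 daemons are done, and the OSDs are being walked one at a time. That is consistent with the mixed `ceph versions` output a few entries earlier, where four OSDs and all RGW daemons still report 17.2.0. One way to list exactly which daemons are still pending is to filter the orchestrator's inventory; a hedged sketch (JSON field names such as daemon_name and version vary slightly across releases, and monitoring daemons report their own component versions, so they will appear here too):

  # daemons whose reported version is not the target squid build
  ceph orch ps --format json | \
    jq -r '.[] | select(((.version // "") | startswith("19.2.3")) | not) | .daemon_name'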
2026-03-10T05:54:53.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:53 vm05 bash[43541]: cluster 2026-03-10T05:54:52.385505+0000 mon.a (mon.0) 319 : cluster [DBG] osdmap e108: 8 total, 7 up, 8 in
2026-03-10T05:54:53.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:53 vm05 bash[43541]: audit 2026-03-10T05:54:52.697948+0000 mgr.y (mgr.24992) 144 : audit [DBG] from='client.34240 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:54:53.934 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_WARN 1 osds down; Reduced data availability: 6 pgs inactive, 11 pgs peering
2026-03-10T05:54:53.934 INFO:teuthology.orchestra.run.vm02.stdout:[WRN] OSD_DOWN: 1 osds down
2026-03-10T05:54:53.934 INFO:teuthology.orchestra.run.vm02.stdout: osd.1 (root=default,host=vm02) is down
2026-03-10T05:54:53.934 INFO:teuthology.orchestra.run.vm02.stdout:[WRN] PG_AVAILABILITY: Reduced data availability: 6 pgs inactive, 11 pgs peering
2026-03-10T05:54:53.934 INFO:teuthology.orchestra.run.vm02.stdout: pg 2.6 is stuck peering for 7m, current state peering, last acting [6,4]
2026-03-10T05:54:53.934 INFO:teuthology.orchestra.run.vm02.stdout: pg 2.9 is stuck peering for 7m, current state peering, last acting [7,3]
2026-03-10T05:54:53.934 INFO:teuthology.orchestra.run.vm02.stdout: pg 2.a is stuck peering for 7m, current state peering, last acting [3,7]
2026-03-10T05:54:53.934 INFO:teuthology.orchestra.run.vm02.stdout: pg 2.d is stuck peering for 7m, current state peering, last acting [4,3]
2026-03-10T05:54:53.934 INFO:teuthology.orchestra.run.vm02.stdout: pg 3.19 is stuck peering for 7m, current state peering, last acting [3,4]
2026-03-10T05:54:53.934 INFO:teuthology.orchestra.run.vm02.stdout: pg 4.2 is stuck peering for 7m, current state peering, last acting [5,4]
2026-03-10T05:54:53.934 INFO:teuthology.orchestra.run.vm02.stdout: pg 4.f is stuck peering for 60s, current state peering, last acting [3,4]
2026-03-10T05:54:53.934 INFO:teuthology.orchestra.run.vm02.stdout: pg 5.12 is stuck peering for 60s, current state peering, last acting [5,3]
2026-03-10T05:54:53.934 INFO:teuthology.orchestra.run.vm02.stdout: pg 5.19 is stuck peering for 7m, current state peering, last acting [5,7]
2026-03-10T05:54:53.934 INFO:teuthology.orchestra.run.vm02.stdout: pg 6.4 is stuck peering for 60s, current state peering, last acting [5,3]
2026-03-10T05:54:53.934 INFO:teuthology.orchestra.run.vm02.stdout: pg 6.1d is stuck peering for 7m, current state peering, last acting [5,4]
2026-03-10T05:54:54.084 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:53 vm02 bash[65234]: Running command: /usr/bin/ln -snf /dev/ceph-21805841-bb4d-423b-9e61-78c38e72741e/osd-block-c0820da9-42eb-422f-88aa-598d51d4e5e7 /var/lib/ceph/osd/ceph-1/block
2026-03-10T05:54:54.084 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:53 vm02 bash[65234]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
2026-03-10T05:54:54.084 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:53 vm02 bash[65234]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
2026-03-10T05:54:54.084 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:53 vm02 bash[65234]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
2026-03-10T05:54:54.085 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:53 vm02 bash[65234]: --> ceph-volume lvm activate successful for osd ID: 1
2026-03-10T05:54:54.395 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:54 vm05 bash[43541]: cluster 2026-03-10T05:54:52.845655+0000 mgr.y (mgr.24992) 145 : cluster [DBG] pgmap v71: 161 pgs: 50 peering, 4 stale+active+clean, 107 active+clean; 457 KiB data, 169 MiB used, 160 GiB / 160 GiB avail
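The peering and stale PGs above are the expected transient while osd.1's PGs cut over to new acting sets; cephadm keeps at most one OSD down at a time, so the cluster stays writeable throughout. If peering persisted instead of clearing, the usual next step is to ask a stuck PG what it is waiting on; a minimal sketch (PG id taken from the health detail above; the exact layout of the query output varies by release):

  # dump the peering state machine of one stuck PG; the Peering entry
  # typically lists probing_osds and any down OSDs blocking progress
  ceph pg 2.6 query | jq '.recovery_state'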
2026-03-10T05:54:54.395 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:54 vm05 bash[43541]: audit 2026-03-10T05:54:52.970597+0000 mgr.y (mgr.24992) 146 : audit [DBG] from='client.34246 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:54:54.395 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:54:54 vm05 bash[41269]: ts=2026-03-10T05:54:54.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.1\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.1\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.1\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:54:54.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:54 vm05 bash[43541]: audit 2026-03-10T05:54:53.164374+0000 mgr.y (mgr.24992) 147 : audit [DBG] from='client.44250 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:54:54.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:54 vm05 bash[43541]: cluster 2026-03-10T05:54:53.377067+0000 mon.a (mon.0) 320 : cluster [WRN] Health check failed: Reduced data availability: 6 pgs inactive, 11 pgs peering (PG_AVAILABILITY)
2026-03-10T05:54:54.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:54 vm05 bash[43541]: audit 2026-03-10T05:54:53.443925+0000 mon.b (mon.2) 6 : audit [DBG] from='client.? 192.168.123.102:0/3558506292' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:54:54.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:54 vm05 bash[43541]: audit 2026-03-10T05:54:53.636604+0000 mgr.y (mgr.24992) 148 : audit [DBG] from='client.34264 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:54:54.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:54 vm05 bash[43541]: audit 2026-03-10T05:54:53.933545+0000 mon.c (mon.1) 9 : audit [DBG] from='client.? 192.168.123.102:0/2508999959' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
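The CephOSDFlapping failure above is a metrics-labeling problem rather than a cluster problem: Prometheus holds two ceph_osd_metadata series for osd.1 (one with instance="ceph_cluster", one with instance="192.168.123.105:9283", per the err text), so the on (ceph_daemon) group_left join in the rule becomes many-to-many and evaluation aborts. A common workaround is to collapse the right-hand side to one series per daemon before joining; a hedged sketch that queries a rewritten form of the expression against the prometheus.a endpoint listed in the `ceph orch ps` output (vm05, port 9095):

  # evaluate a deduplicated form of the rule by hand
  curl -sG 'http://vm05:9095/api/v1/query' --data-urlencode \
    'query=(rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) max by (ceph_daemon, hostname) (ceph_osd_metadata)) * 60 > 1'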
192.168.123.102:0/2508999959' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:54:54.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:54 vm05 bash[43541]: audit 2026-03-10T05:54:53.933545+0000 mon.c (mon.1) 9 : audit [DBG] from='client.? 192.168.123.102:0/2508999959' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:54:54.834 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:54 vm02 bash[65730]: debug 2026-03-10T05:54:54.551+0000 7f854af77740 -1 Falling back to public interface 2026-03-10T05:54:54.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:54 vm02 bash[56371]: cluster 2026-03-10T05:54:52.845655+0000 mgr.y (mgr.24992) 145 : cluster [DBG] pgmap v71: 161 pgs: 50 peering, 4 stale+active+clean, 107 active+clean; 457 KiB data, 169 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:54:54.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:54 vm02 bash[56371]: cluster 2026-03-10T05:54:52.845655+0000 mgr.y (mgr.24992) 145 : cluster [DBG] pgmap v71: 161 pgs: 50 peering, 4 stale+active+clean, 107 active+clean; 457 KiB data, 169 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:54:54.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:54 vm02 bash[56371]: audit 2026-03-10T05:54:52.970597+0000 mgr.y (mgr.24992) 146 : audit [DBG] from='client.34246 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:54.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:54 vm02 bash[56371]: audit 2026-03-10T05:54:52.970597+0000 mgr.y (mgr.24992) 146 : audit [DBG] from='client.34246 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:54.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:54 vm02 bash[56371]: audit 2026-03-10T05:54:53.164374+0000 mgr.y (mgr.24992) 147 : audit [DBG] from='client.44250 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:54.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:54 vm02 bash[56371]: audit 2026-03-10T05:54:53.164374+0000 mgr.y (mgr.24992) 147 : audit [DBG] from='client.44250 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:54.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:54 vm02 bash[56371]: cluster 2026-03-10T05:54:53.377067+0000 mon.a (mon.0) 320 : cluster [WRN] Health check failed: Reduced data availability: 6 pgs inactive, 11 pgs peering (PG_AVAILABILITY) 2026-03-10T05:54:54.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:54 vm02 bash[56371]: cluster 2026-03-10T05:54:53.377067+0000 mon.a (mon.0) 320 : cluster [WRN] Health check failed: Reduced data availability: 6 pgs inactive, 11 pgs peering (PG_AVAILABILITY) 2026-03-10T05:54:54.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:54 vm02 bash[56371]: audit 2026-03-10T05:54:53.443925+0000 mon.b (mon.2) 6 : audit [DBG] from='client.? 192.168.123.102:0/3558506292' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:54:54.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:54 vm02 bash[56371]: audit 2026-03-10T05:54:53.443925+0000 mon.b (mon.2) 6 : audit [DBG] from='client.? 
192.168.123.102:0/3558506292' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:54:54.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:54 vm02 bash[56371]: audit 2026-03-10T05:54:53.636604+0000 mgr.y (mgr.24992) 148 : audit [DBG] from='client.34264 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:54.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:54 vm02 bash[56371]: audit 2026-03-10T05:54:53.636604+0000 mgr.y (mgr.24992) 148 : audit [DBG] from='client.34264 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:54.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:54 vm02 bash[56371]: audit 2026-03-10T05:54:53.933545+0000 mon.c (mon.1) 9 : audit [DBG] from='client.? 192.168.123.102:0/2508999959' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:54:54.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:54 vm02 bash[56371]: audit 2026-03-10T05:54:53.933545+0000 mon.c (mon.1) 9 : audit [DBG] from='client.? 192.168.123.102:0/2508999959' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:54:54.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:54 vm02 bash[55303]: cluster 2026-03-10T05:54:52.845655+0000 mgr.y (mgr.24992) 145 : cluster [DBG] pgmap v71: 161 pgs: 50 peering, 4 stale+active+clean, 107 active+clean; 457 KiB data, 169 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:54:54.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:54 vm02 bash[55303]: cluster 2026-03-10T05:54:52.845655+0000 mgr.y (mgr.24992) 145 : cluster [DBG] pgmap v71: 161 pgs: 50 peering, 4 stale+active+clean, 107 active+clean; 457 KiB data, 169 MiB used, 160 GiB / 160 GiB avail 2026-03-10T05:54:54.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:54 vm02 bash[55303]: audit 2026-03-10T05:54:52.970597+0000 mgr.y (mgr.24992) 146 : audit [DBG] from='client.34246 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:54.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:54 vm02 bash[55303]: audit 2026-03-10T05:54:52.970597+0000 mgr.y (mgr.24992) 146 : audit [DBG] from='client.34246 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:54.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:54 vm02 bash[55303]: audit 2026-03-10T05:54:53.164374+0000 mgr.y (mgr.24992) 147 : audit [DBG] from='client.44250 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:54.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:54 vm02 bash[55303]: audit 2026-03-10T05:54:53.164374+0000 mgr.y (mgr.24992) 147 : audit [DBG] from='client.44250 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:54.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:54 vm02 bash[55303]: cluster 2026-03-10T05:54:53.377067+0000 mon.a (mon.0) 320 : cluster [WRN] Health check failed: Reduced data availability: 6 pgs inactive, 11 pgs peering (PG_AVAILABILITY) 2026-03-10T05:54:54.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:54 vm02 bash[55303]: cluster 2026-03-10T05:54:53.377067+0000 mon.a (mon.0) 320 : cluster [WRN] Health check failed: Reduced data availability: 6 pgs inactive, 11 pgs peering (PG_AVAILABILITY) 2026-03-10T05:54:54.835 
INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:54 vm02 bash[55303]: audit 2026-03-10T05:54:53.443925+0000 mon.b (mon.2) 6 : audit [DBG] from='client.? 192.168.123.102:0/3558506292' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:54:54.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:54 vm02 bash[55303]: audit 2026-03-10T05:54:53.443925+0000 mon.b (mon.2) 6 : audit [DBG] from='client.? 192.168.123.102:0/3558506292' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:54:54.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:54 vm02 bash[55303]: audit 2026-03-10T05:54:53.636604+0000 mgr.y (mgr.24992) 148 : audit [DBG] from='client.34264 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:54.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:54 vm02 bash[55303]: audit 2026-03-10T05:54:53.636604+0000 mgr.y (mgr.24992) 148 : audit [DBG] from='client.34264 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:54:54.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:54 vm02 bash[55303]: audit 2026-03-10T05:54:53.933545+0000 mon.c (mon.1) 9 : audit [DBG] from='client.? 192.168.123.102:0/2508999959' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:54:54.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:54 vm02 bash[55303]: audit 2026-03-10T05:54:53.933545+0000 mon.c (mon.1) 9 : audit [DBG] from='client.? 192.168.123.102:0/2508999959' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:54:55.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:55 vm05 bash[43541]: cluster 2026-03-10T05:54:55.388620+0000 mon.a (mon.0) 321 : cluster [WRN] Health check failed: Degraded data redundancy: 23/723 objects degraded (3.181%), 3 pgs degraded (PG_DEGRADED) 2026-03-10T05:54:55.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:55 vm05 bash[43541]: cluster 2026-03-10T05:54:55.388620+0000 mon.a (mon.0) 321 : cluster [WRN] Health check failed: Degraded data redundancy: 23/723 objects degraded (3.181%), 3 pgs degraded (PG_DEGRADED) 2026-03-10T05:54:55.834 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:55 vm02 bash[55303]: cluster 2026-03-10T05:54:55.388620+0000 mon.a (mon.0) 321 : cluster [WRN] Health check failed: Degraded data redundancy: 23/723 objects degraded (3.181%), 3 pgs degraded (PG_DEGRADED) 2026-03-10T05:54:55.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:55 vm02 bash[55303]: cluster 2026-03-10T05:54:55.388620+0000 mon.a (mon.0) 321 : cluster [WRN] Health check failed: Degraded data redundancy: 23/723 objects degraded (3.181%), 3 pgs degraded (PG_DEGRADED) 2026-03-10T05:54:55.835 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:55 vm02 bash[65730]: debug 2026-03-10T05:54:55.523+0000 7f854af77740 -1 osd.1 0 read_superblock omap replica is missing. 
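The audit records above show client.admin repeatedly dispatching "orch upgrade status", "versions", "orch ps", and "health detail" against the mons while the upgrade runs. A minimal polling loop that would produce this audit trail might look like the following sketch (assuming jq is installed and that `ceph orch upgrade status` emits JSON with an `in_progress` field, as the dispatches suggest):

    # Sketch only: poll until the orchestrator reports the upgrade finished
    while ceph orch upgrade status | jq -e '.in_progress == true' >/dev/null; do
        ceph orch ps        # per-daemon version/image status
        ceph versions       # per-component version counts
        ceph health detail  # watch transient PG_AVAILABILITY/PG_DEGRADED warnings
        sleep 30
    done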
2026-03-10T05:54:55.835 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:55 vm02 bash[65730]: debug 2026-03-10T05:54:55.543+0000 7f854af77740 -1 osd.1 106 log_to_monitors true 2026-03-10T05:54:55.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:55 vm02 bash[56371]: cluster 2026-03-10T05:54:55.388620+0000 mon.a (mon.0) 321 : cluster [WRN] Health check failed: Degraded data redundancy: 23/723 objects degraded (3.181%), 3 pgs degraded (PG_DEGRADED) 2026-03-10T05:54:55.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:55 vm02 bash[56371]: cluster 2026-03-10T05:54:55.388620+0000 mon.a (mon.0) 321 : cluster [WRN] Health check failed: Degraded data redundancy: 23/723 objects degraded (3.181%), 3 pgs degraded (PG_DEGRADED) 2026-03-10T05:54:56.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:56 vm05 bash[43541]: cluster 2026-03-10T05:54:54.846095+0000 mgr.y (mgr.24992) 149 : cluster [DBG] pgmap v72: 161 pgs: 3 active+undersized, 50 peering, 1 stale+active+clean, 3 active+undersized+degraded, 104 active+clean; 457 KiB data, 169 MiB used, 160 GiB / 160 GiB avail; 23/723 objects degraded (3.181%) 2026-03-10T05:54:56.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:56 vm05 bash[43541]: cluster 2026-03-10T05:54:54.846095+0000 mgr.y (mgr.24992) 149 : cluster [DBG] pgmap v72: 161 pgs: 3 active+undersized, 50 peering, 1 stale+active+clean, 3 active+undersized+degraded, 104 active+clean; 457 KiB data, 169 MiB used, 160 GiB / 160 GiB avail; 23/723 objects degraded (3.181%) 2026-03-10T05:54:56.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:56 vm05 bash[43541]: audit 2026-03-10T05:54:55.551147+0000 mon.a (mon.0) 322 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3078876362,v1:192.168.123.102:6811/3078876362]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T05:54:56.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:56 vm05 bash[43541]: audit 2026-03-10T05:54:55.551147+0000 mon.a (mon.0) 322 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3078876362,v1:192.168.123.102:6811/3078876362]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T05:54:56.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:56 vm05 bash[43541]: audit 2026-03-10T05:54:55.885061+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:56.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:56 vm05 bash[43541]: audit 2026-03-10T05:54:55.885061+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:56.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:56 vm05 bash[43541]: audit 2026-03-10T05:54:55.885871+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:54:56.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:56 vm05 bash[43541]: audit 2026-03-10T05:54:55.885871+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:54:56.834 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:54:56 vm02 bash[65730]: debug 2026-03-10T05:54:56.479+0000 7f8542d22640 -1 osd.1 106 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T05:54:56.834 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:56 vm02 bash[56371]: cluster 2026-03-10T05:54:54.846095+0000 mgr.y (mgr.24992) 149 : cluster [DBG] pgmap v72: 161 pgs: 3 active+undersized, 50 peering, 1 stale+active+clean, 3 active+undersized+degraded, 104 active+clean; 457 KiB data, 169 MiB used, 160 GiB / 160 GiB avail; 23/723 objects degraded (3.181%) 2026-03-10T05:54:56.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:56 vm02 bash[56371]: cluster 2026-03-10T05:54:54.846095+0000 mgr.y (mgr.24992) 149 : cluster [DBG] pgmap v72: 161 pgs: 3 active+undersized, 50 peering, 1 stale+active+clean, 3 active+undersized+degraded, 104 active+clean; 457 KiB data, 169 MiB used, 160 GiB / 160 GiB avail; 23/723 objects degraded (3.181%) 2026-03-10T05:54:56.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:56 vm02 bash[56371]: audit 2026-03-10T05:54:55.551147+0000 mon.a (mon.0) 322 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3078876362,v1:192.168.123.102:6811/3078876362]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T05:54:56.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:56 vm02 bash[56371]: audit 2026-03-10T05:54:55.551147+0000 mon.a (mon.0) 322 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3078876362,v1:192.168.123.102:6811/3078876362]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T05:54:56.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:56 vm02 bash[56371]: audit 2026-03-10T05:54:55.885061+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:56.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:56 vm02 bash[56371]: audit 2026-03-10T05:54:55.885061+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:56.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:56 vm02 bash[56371]: audit 2026-03-10T05:54:55.885871+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:54:56.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:56 vm02 bash[56371]: audit 2026-03-10T05:54:55.885871+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:54:56.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:56 vm02 bash[55303]: cluster 2026-03-10T05:54:54.846095+0000 mgr.y (mgr.24992) 149 : cluster [DBG] pgmap v72: 161 pgs: 3 active+undersized, 50 peering, 1 stale+active+clean, 3 active+undersized+degraded, 104 active+clean; 457 KiB data, 169 MiB used, 160 GiB / 160 GiB avail; 23/723 objects degraded (3.181%) 2026-03-10T05:54:56.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:56 vm02 bash[55303]: cluster 2026-03-10T05:54:54.846095+0000 mgr.y (mgr.24992) 149 : cluster [DBG] pgmap v72: 161 pgs: 3 active+undersized, 50 peering, 1 stale+active+clean, 3 active+undersized+degraded, 104 active+clean; 457 KiB data, 169 MiB used, 160 GiB / 160 GiB avail; 23/723 objects degraded (3.181%) 2026-03-10T05:54:56.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:56 vm02 bash[55303]: audit 2026-03-10T05:54:55.551147+0000 mon.a (mon.0) 322 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3078876362,v1:192.168.123.102:6811/3078876362]' entity='osd.1' cmd=[{"prefix": "osd 
crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T05:54:56.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:56 vm02 bash[55303]: audit 2026-03-10T05:54:55.551147+0000 mon.a (mon.0) 322 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3078876362,v1:192.168.123.102:6811/3078876362]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T05:54:56.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:56 vm02 bash[55303]: audit 2026-03-10T05:54:55.885061+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:56.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:56 vm02 bash[55303]: audit 2026-03-10T05:54:55.885061+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:56.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:56 vm02 bash[55303]: audit 2026-03-10T05:54:55.885871+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:54:56.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:56 vm02 bash[55303]: audit 2026-03-10T05:54:55.885871+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:54:57.251 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:54:56 vm05 bash[41269]: ts=2026-03-10T05:54:56.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T05:54:57.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:57 vm05 bash[43541]: audit 2026-03-10T05:54:56.452342+0000 mon.a (mon.0) 325 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3078876362,v1:192.168.123.102:6811/3078876362]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T05:54:57.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:57 vm05 bash[43541]: audit 2026-03-10T05:54:56.452342+0000 mon.a (mon.0) 325 : audit [INF] from='osd.1 
[v2:192.168.123.102:6810/3078876362,v1:192.168.123.102:6811/3078876362]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T05:54:57.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:57 vm05 bash[43541]: cluster 2026-03-10T05:54:56.459432+0000 mon.a (mon.0) 326 : cluster [DBG] osdmap e109: 8 total, 7 up, 8 in 2026-03-10T05:54:57.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:57 vm05 bash[43541]: cluster 2026-03-10T05:54:56.459432+0000 mon.a (mon.0) 326 : cluster [DBG] osdmap e109: 8 total, 7 up, 8 in 2026-03-10T05:54:57.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:57 vm05 bash[43541]: audit 2026-03-10T05:54:56.459732+0000 mon.a (mon.0) 327 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3078876362,v1:192.168.123.102:6811/3078876362]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:54:57.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:57 vm05 bash[43541]: audit 2026-03-10T05:54:56.459732+0000 mon.a (mon.0) 327 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3078876362,v1:192.168.123.102:6811/3078876362]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:54:57.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:57 vm02 bash[56371]: audit 2026-03-10T05:54:56.452342+0000 mon.a (mon.0) 325 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3078876362,v1:192.168.123.102:6811/3078876362]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T05:54:57.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:57 vm02 bash[56371]: audit 2026-03-10T05:54:56.452342+0000 mon.a (mon.0) 325 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3078876362,v1:192.168.123.102:6811/3078876362]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T05:54:57.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:57 vm02 bash[56371]: cluster 2026-03-10T05:54:56.459432+0000 mon.a (mon.0) 326 : cluster [DBG] osdmap e109: 8 total, 7 up, 8 in 2026-03-10T05:54:57.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:57 vm02 bash[56371]: cluster 2026-03-10T05:54:56.459432+0000 mon.a (mon.0) 326 : cluster [DBG] osdmap e109: 8 total, 7 up, 8 in 2026-03-10T05:54:57.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:57 vm02 bash[56371]: audit 2026-03-10T05:54:56.459732+0000 mon.a (mon.0) 327 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3078876362,v1:192.168.123.102:6811/3078876362]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:54:57.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:57 vm02 bash[56371]: audit 2026-03-10T05:54:56.459732+0000 mon.a (mon.0) 327 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3078876362,v1:192.168.123.102:6811/3078876362]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:54:57.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:57 vm02 bash[55303]: audit 2026-03-10T05:54:56.452342+0000 mon.a (mon.0) 325 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3078876362,v1:192.168.123.102:6811/3078876362]' entity='osd.1' cmd='[{"prefix": "osd crush 
set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T05:54:57.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:57 vm02 bash[55303]: audit 2026-03-10T05:54:56.452342+0000 mon.a (mon.0) 325 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3078876362,v1:192.168.123.102:6811/3078876362]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T05:54:57.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:57 vm02 bash[55303]: cluster 2026-03-10T05:54:56.459432+0000 mon.a (mon.0) 326 : cluster [DBG] osdmap e109: 8 total, 7 up, 8 in 2026-03-10T05:54:57.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:57 vm02 bash[55303]: cluster 2026-03-10T05:54:56.459432+0000 mon.a (mon.0) 326 : cluster [DBG] osdmap e109: 8 total, 7 up, 8 in 2026-03-10T05:54:57.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:57 vm02 bash[55303]: audit 2026-03-10T05:54:56.459732+0000 mon.a (mon.0) 327 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3078876362,v1:192.168.123.102:6811/3078876362]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:54:57.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:57 vm02 bash[55303]: audit 2026-03-10T05:54:56.459732+0000 mon.a (mon.0) 327 : audit [INF] from='osd.1 [v2:192.168.123.102:6810/3078876362,v1:192.168.123.102:6811/3078876362]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm02", "root=default"]}]: dispatch 2026-03-10T05:54:58.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:58 vm02 bash[56371]: cluster 2026-03-10T05:54:56.846683+0000 mgr.y (mgr.24992) 150 : cluster [DBG] pgmap v74: 161 pgs: 40 active+undersized, 5 peering, 18 active+undersized+degraded, 98 active+clean; 457 KiB data, 191 MiB used, 160 GiB / 160 GiB avail; 93/723 objects degraded (12.863%) 2026-03-10T05:54:58.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:58 vm02 bash[56371]: cluster 2026-03-10T05:54:56.846683+0000 mgr.y (mgr.24992) 150 : cluster [DBG] pgmap v74: 161 pgs: 40 active+undersized, 5 peering, 18 active+undersized+degraded, 98 active+clean; 457 KiB data, 191 MiB used, 160 GiB / 160 GiB avail; 93/723 objects degraded (12.863%) 2026-03-10T05:54:58.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:58 vm02 bash[56371]: audit 2026-03-10T05:54:56.943041+0000 mgr.y (mgr.24992) 151 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:54:58.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:58 vm02 bash[56371]: audit 2026-03-10T05:54:56.943041+0000 mgr.y (mgr.24992) 151 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:54:58.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:58 vm02 bash[56371]: cluster 2026-03-10T05:54:57.454424+0000 mon.a (mon.0) 328 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T05:54:58.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:58 vm02 bash[56371]: cluster 2026-03-10T05:54:57.454424+0000 mon.a (mon.0) 328 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T05:54:58.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:58 vm02 bash[56371]: cluster 2026-03-10T05:54:57.479365+0000 mon.a (mon.0) 329 : cluster [INF] osd.1 
[v2:192.168.123.102:6810/3078876362,v1:192.168.123.102:6811/3078876362] boot 2026-03-10T05:54:58.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:58 vm02 bash[56371]: cluster 2026-03-10T05:54:57.479365+0000 mon.a (mon.0) 329 : cluster [INF] osd.1 [v2:192.168.123.102:6810/3078876362,v1:192.168.123.102:6811/3078876362] boot 2026-03-10T05:54:58.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:58 vm02 bash[56371]: cluster 2026-03-10T05:54:57.481301+0000 mon.a (mon.0) 330 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-10T05:54:58.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:58 vm02 bash[56371]: cluster 2026-03-10T05:54:57.481301+0000 mon.a (mon.0) 330 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-10T05:54:58.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:58 vm02 bash[56371]: audit 2026-03-10T05:54:57.485074+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:58.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:58 vm02 bash[56371]: audit 2026-03-10T05:54:57.485074+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:58 vm02 bash[55303]: cluster 2026-03-10T05:54:56.846683+0000 mgr.y (mgr.24992) 150 : cluster [DBG] pgmap v74: 161 pgs: 40 active+undersized, 5 peering, 18 active+undersized+degraded, 98 active+clean; 457 KiB data, 191 MiB used, 160 GiB / 160 GiB avail; 93/723 objects degraded (12.863%) 2026-03-10T05:54:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:58 vm02 bash[55303]: cluster 2026-03-10T05:54:56.846683+0000 mgr.y (mgr.24992) 150 : cluster [DBG] pgmap v74: 161 pgs: 40 active+undersized, 5 peering, 18 active+undersized+degraded, 98 active+clean; 457 KiB data, 191 MiB used, 160 GiB / 160 GiB avail; 93/723 objects degraded (12.863%) 2026-03-10T05:54:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:58 vm02 bash[55303]: audit 2026-03-10T05:54:56.943041+0000 mgr.y (mgr.24992) 151 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:54:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:58 vm02 bash[55303]: audit 2026-03-10T05:54:56.943041+0000 mgr.y (mgr.24992) 151 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:54:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:58 vm02 bash[55303]: cluster 2026-03-10T05:54:57.454424+0000 mon.a (mon.0) 328 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T05:54:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:58 vm02 bash[55303]: cluster 2026-03-10T05:54:57.454424+0000 mon.a (mon.0) 328 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T05:54:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:58 vm02 bash[55303]: cluster 2026-03-10T05:54:57.479365+0000 mon.a (mon.0) 329 : cluster [INF] osd.1 [v2:192.168.123.102:6810/3078876362,v1:192.168.123.102:6811/3078876362] boot 2026-03-10T05:54:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:58 vm02 bash[55303]: cluster 2026-03-10T05:54:57.479365+0000 mon.a (mon.0) 329 : cluster [INF] osd.1 
[v2:192.168.123.102:6810/3078876362,v1:192.168.123.102:6811/3078876362] boot 2026-03-10T05:54:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:58 vm02 bash[55303]: cluster 2026-03-10T05:54:57.481301+0000 mon.a (mon.0) 330 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-10T05:54:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:58 vm02 bash[55303]: cluster 2026-03-10T05:54:57.481301+0000 mon.a (mon.0) 330 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-10T05:54:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:58 vm02 bash[55303]: audit 2026-03-10T05:54:57.485074+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:58.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:58 vm02 bash[55303]: audit 2026-03-10T05:54:57.485074+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:59.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:58 vm05 bash[43541]: cluster 2026-03-10T05:54:56.846683+0000 mgr.y (mgr.24992) 150 : cluster [DBG] pgmap v74: 161 pgs: 40 active+undersized, 5 peering, 18 active+undersized+degraded, 98 active+clean; 457 KiB data, 191 MiB used, 160 GiB / 160 GiB avail; 93/723 objects degraded (12.863%) 2026-03-10T05:54:59.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:58 vm05 bash[43541]: cluster 2026-03-10T05:54:56.846683+0000 mgr.y (mgr.24992) 150 : cluster [DBG] pgmap v74: 161 pgs: 40 active+undersized, 5 peering, 18 active+undersized+degraded, 98 active+clean; 457 KiB data, 191 MiB used, 160 GiB / 160 GiB avail; 93/723 objects degraded (12.863%) 2026-03-10T05:54:59.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:58 vm05 bash[43541]: audit 2026-03-10T05:54:56.943041+0000 mgr.y (mgr.24992) 151 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:54:59.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:58 vm05 bash[43541]: audit 2026-03-10T05:54:56.943041+0000 mgr.y (mgr.24992) 151 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:54:59.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:58 vm05 bash[43541]: cluster 2026-03-10T05:54:57.454424+0000 mon.a (mon.0) 328 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T05:54:59.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:58 vm05 bash[43541]: cluster 2026-03-10T05:54:57.454424+0000 mon.a (mon.0) 328 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T05:54:59.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:58 vm05 bash[43541]: cluster 2026-03-10T05:54:57.479365+0000 mon.a (mon.0) 329 : cluster [INF] osd.1 [v2:192.168.123.102:6810/3078876362,v1:192.168.123.102:6811/3078876362] boot 2026-03-10T05:54:59.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:58 vm05 bash[43541]: cluster 2026-03-10T05:54:57.479365+0000 mon.a (mon.0) 329 : cluster [INF] osd.1 [v2:192.168.123.102:6810/3078876362,v1:192.168.123.102:6811/3078876362] boot 2026-03-10T05:54:59.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:58 vm05 bash[43541]: cluster 2026-03-10T05:54:57.481301+0000 mon.a (mon.0) 330 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-10T05:54:59.000 
INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:58 vm05 bash[43541]: cluster 2026-03-10T05:54:57.481301+0000 mon.a (mon.0) 330 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-10T05:54:59.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:58 vm05 bash[43541]: audit 2026-03-10T05:54:57.485074+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:59.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:58 vm05 bash[43541]: audit 2026-03-10T05:54:57.485074+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T05:54:59.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:59 vm02 bash[56371]: audit 2026-03-10T05:54:58.511035+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:59.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:59 vm02 bash[56371]: audit 2026-03-10T05:54:58.511035+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:59.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:59 vm02 bash[56371]: cluster 2026-03-10T05:54:58.555473+0000 mon.a (mon.0) 333 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-10T05:54:59.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:59 vm02 bash[56371]: cluster 2026-03-10T05:54:58.555473+0000 mon.a (mon.0) 333 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-10T05:54:59.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:59 vm02 bash[56371]: audit 2026-03-10T05:54:58.577588+0000 mon.a (mon.0) 334 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:59.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:59 vm02 bash[56371]: audit 2026-03-10T05:54:58.577588+0000 mon.a (mon.0) 334 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:59.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:59 vm02 bash[56371]: audit 2026-03-10T05:54:59.161677+0000 mon.a (mon.0) 335 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:59.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:59 vm02 bash[56371]: audit 2026-03-10T05:54:59.161677+0000 mon.a (mon.0) 335 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:59.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:59 vm02 bash[56371]: audit 2026-03-10T05:54:59.168452+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:59.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:54:59 vm02 bash[56371]: audit 2026-03-10T05:54:59.168452+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:59.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:59 vm02 bash[55303]: audit 2026-03-10T05:54:58.511035+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:59.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:59 vm02 bash[55303]: audit 2026-03-10T05:54:58.511035+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:59.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:59 vm02 bash[55303]: cluster 
2026-03-10T05:54:58.555473+0000 mon.a (mon.0) 333 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-10T05:54:59.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:59 vm02 bash[55303]: cluster 2026-03-10T05:54:58.555473+0000 mon.a (mon.0) 333 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-10T05:54:59.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:59 vm02 bash[55303]: audit 2026-03-10T05:54:58.577588+0000 mon.a (mon.0) 334 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:59.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:59 vm02 bash[55303]: audit 2026-03-10T05:54:58.577588+0000 mon.a (mon.0) 334 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:59.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:59 vm02 bash[55303]: audit 2026-03-10T05:54:59.161677+0000 mon.a (mon.0) 335 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:59.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:59 vm02 bash[55303]: audit 2026-03-10T05:54:59.161677+0000 mon.a (mon.0) 335 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:59.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:59 vm02 bash[55303]: audit 2026-03-10T05:54:59.168452+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:54:59.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:54:59 vm02 bash[55303]: audit 2026-03-10T05:54:59.168452+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:00.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:59 vm05 bash[43541]: audit 2026-03-10T05:54:58.511035+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:00.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:59 vm05 bash[43541]: audit 2026-03-10T05:54:58.511035+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:00.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:59 vm05 bash[43541]: cluster 2026-03-10T05:54:58.555473+0000 mon.a (mon.0) 333 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-10T05:55:00.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:59 vm05 bash[43541]: cluster 2026-03-10T05:54:58.555473+0000 mon.a (mon.0) 333 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-10T05:55:00.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:59 vm05 bash[43541]: audit 2026-03-10T05:54:58.577588+0000 mon.a (mon.0) 334 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:00.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:59 vm05 bash[43541]: audit 2026-03-10T05:54:58.577588+0000 mon.a (mon.0) 334 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:00.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:59 vm05 bash[43541]: audit 2026-03-10T05:54:59.161677+0000 mon.a (mon.0) 335 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:00.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:59 vm05 bash[43541]: audit 2026-03-10T05:54:59.161677+0000 mon.a (mon.0) 335 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:00.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:59 vm05 bash[43541]: audit 
2026-03-10T05:54:59.168452+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:00.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:54:59 vm05 bash[43541]: audit 2026-03-10T05:54:59.168452+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:00.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:00 vm02 bash[56371]: cluster 2026-03-10T05:54:58.847007+0000 mgr.y (mgr.24992) 152 : cluster [DBG] pgmap v77: 161 pgs: 3 peering, 40 active+undersized, 20 active+undersized+degraded, 98 active+clean; 457 KiB data, 191 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 0 op/s; 101/723 objects degraded (13.970%) 2026-03-10T05:55:00.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:00 vm02 bash[56371]: cluster 2026-03-10T05:54:58.847007+0000 mgr.y (mgr.24992) 152 : cluster [DBG] pgmap v77: 161 pgs: 3 peering, 40 active+undersized, 20 active+undersized+degraded, 98 active+clean; 457 KiB data, 191 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 0 op/s; 101/723 objects degraded (13.970%) 2026-03-10T05:55:00.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:00 vm02 bash[56371]: cluster 2026-03-10T05:54:59.573128+0000 mon.a (mon.0) 337 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-10T05:55:00.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:00 vm02 bash[56371]: cluster 2026-03-10T05:54:59.573128+0000 mon.a (mon.0) 337 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-10T05:55:00.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:00 vm02 bash[55303]: cluster 2026-03-10T05:54:58.847007+0000 mgr.y (mgr.24992) 152 : cluster [DBG] pgmap v77: 161 pgs: 3 peering, 40 active+undersized, 20 active+undersized+degraded, 98 active+clean; 457 KiB data, 191 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 0 op/s; 101/723 objects degraded (13.970%) 2026-03-10T05:55:00.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:00 vm02 bash[55303]: cluster 2026-03-10T05:54:58.847007+0000 mgr.y (mgr.24992) 152 : cluster [DBG] pgmap v77: 161 pgs: 3 peering, 40 active+undersized, 20 active+undersized+degraded, 98 active+clean; 457 KiB data, 191 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 0 op/s; 101/723 objects degraded (13.970%) 2026-03-10T05:55:00.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:00 vm02 bash[55303]: cluster 2026-03-10T05:54:59.573128+0000 mon.a (mon.0) 337 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-10T05:55:00.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:00 vm02 bash[55303]: cluster 2026-03-10T05:54:59.573128+0000 mon.a (mon.0) 337 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-10T05:55:01.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:00 vm05 bash[43541]: cluster 2026-03-10T05:54:58.847007+0000 mgr.y (mgr.24992) 152 : cluster [DBG] pgmap v77: 161 pgs: 3 peering, 40 active+undersized, 20 active+undersized+degraded, 98 active+clean; 457 KiB data, 191 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 0 op/s; 101/723 objects degraded (13.970%) 2026-03-10T05:55:01.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:00 vm05 bash[43541]: cluster 2026-03-10T05:54:58.847007+0000 mgr.y (mgr.24992) 152 : cluster [DBG] pgmap v77: 161 pgs: 3 peering, 40 active+undersized, 20 
active+undersized+degraded, 98 active+clean; 457 KiB data, 191 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 0 op/s; 101/723 objects degraded (13.970%) 2026-03-10T05:55:01.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:00 vm05 bash[43541]: cluster 2026-03-10T05:54:59.573128+0000 mon.a (mon.0) 337 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-10T05:55:01.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:00 vm05 bash[43541]: cluster 2026-03-10T05:54:59.573128+0000 mon.a (mon.0) 337 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-10T05:55:01.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:01 vm02 bash[56371]: cluster 2026-03-10T05:55:01.184149+0000 mon.a (mon.0) 338 : cluster [WRN] Health check update: Degraded data redundancy: 101/723 objects degraded (13.970%), 20 pgs degraded (PG_DEGRADED) 2026-03-10T05:55:01.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:01 vm02 bash[56371]: cluster 2026-03-10T05:55:01.184149+0000 mon.a (mon.0) 338 : cluster [WRN] Health check update: Degraded data redundancy: 101/723 objects degraded (13.970%), 20 pgs degraded (PG_DEGRADED) 2026-03-10T05:55:01.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:01 vm02 bash[55303]: cluster 2026-03-10T05:55:01.184149+0000 mon.a (mon.0) 338 : cluster [WRN] Health check update: Degraded data redundancy: 101/723 objects degraded (13.970%), 20 pgs degraded (PG_DEGRADED) 2026-03-10T05:55:01.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:01 vm02 bash[55303]: cluster 2026-03-10T05:55:01.184149+0000 mon.a (mon.0) 338 : cluster [WRN] Health check update: Degraded data redundancy: 101/723 objects degraded (13.970%), 20 pgs degraded (PG_DEGRADED) 2026-03-10T05:55:02.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:01 vm05 bash[43541]: cluster 2026-03-10T05:55:01.184149+0000 mon.a (mon.0) 338 : cluster [WRN] Health check update: Degraded data redundancy: 101/723 objects degraded (13.970%), 20 pgs degraded (PG_DEGRADED) 2026-03-10T05:55:02.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:01 vm05 bash[43541]: cluster 2026-03-10T05:55:01.184149+0000 mon.a (mon.0) 338 : cluster [WRN] Health check update: Degraded data redundancy: 101/723 objects degraded (13.970%), 20 pgs degraded (PG_DEGRADED) 2026-03-10T05:55:02.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:02 vm02 bash[56371]: cluster 2026-03-10T05:55:00.847393+0000 mgr.y (mgr.24992) 153 : cluster [DBG] pgmap v78: 161 pgs: 3 peering, 35 active+undersized, 19 active+undersized+degraded, 104 active+clean; 457 KiB data, 191 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 95/723 objects degraded (13.140%) 2026-03-10T05:55:02.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:02 vm02 bash[56371]: cluster 2026-03-10T05:55:00.847393+0000 mgr.y (mgr.24992) 153 : cluster [DBG] pgmap v78: 161 pgs: 3 peering, 35 active+undersized, 19 active+undersized+degraded, 104 active+clean; 457 KiB data, 191 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 95/723 objects degraded (13.140%) 2026-03-10T05:55:02.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:02 vm02 bash[55303]: cluster 2026-03-10T05:55:00.847393+0000 mgr.y (mgr.24992) 153 : cluster [DBG] pgmap v78: 161 pgs: 3 peering, 35 active+undersized, 19 active+undersized+degraded, 104 active+clean; 457 KiB data, 191 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 95/723 objects degraded (13.140%) 2026-03-10T05:55:02.835 
INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:02 vm02 bash[55303]: cluster 2026-03-10T05:55:00.847393+0000 mgr.y (mgr.24992) 153 : cluster [DBG] pgmap v78: 161 pgs: 3 peering, 35 active+undersized, 19 active+undersized+degraded, 104 active+clean; 457 KiB data, 191 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 95/723 objects degraded (13.140%) 2026-03-10T05:55:03.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:02 vm05 bash[43541]: cluster 2026-03-10T05:55:00.847393+0000 mgr.y (mgr.24992) 153 : cluster [DBG] pgmap v78: 161 pgs: 3 peering, 35 active+undersized, 19 active+undersized+degraded, 104 active+clean; 457 KiB data, 191 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 95/723 objects degraded (13.140%) 2026-03-10T05:55:03.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:02 vm05 bash[43541]: cluster 2026-03-10T05:55:00.847393+0000 mgr.y (mgr.24992) 153 : cluster [DBG] pgmap v78: 161 pgs: 3 peering, 35 active+undersized, 19 active+undersized+degraded, 104 active+clean; 457 KiB data, 191 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 95/723 objects degraded (13.140%) 2026-03-10T05:55:03.335 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:55:02 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:55:02] "GET /metrics HTTP/1.1" 200 38051 "" "Prometheus/2.51.0" 2026-03-10T05:55:03.966 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:03 vm02 bash[56371]: cluster 2026-03-10T05:55:03.543644+0000 mon.a (mon.0) 339 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 95/723 objects degraded (13.140%), 19 pgs degraded) 2026-03-10T05:55:03.966 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:03 vm02 bash[56371]: cluster 2026-03-10T05:55:03.543644+0000 mon.a (mon.0) 339 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 95/723 objects degraded (13.140%), 19 pgs degraded) 2026-03-10T05:55:03.966 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:03 vm02 bash[56371]: cluster 2026-03-10T05:55:03.543700+0000 mon.a (mon.0) 340 : cluster [INF] Cluster is now healthy 2026-03-10T05:55:03.966 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:03 vm02 bash[56371]: cluster 2026-03-10T05:55:03.543700+0000 mon.a (mon.0) 340 : cluster [INF] Cluster is now healthy 2026-03-10T05:55:03.966 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:03 vm02 bash[55303]: cluster 2026-03-10T05:55:03.543644+0000 mon.a (mon.0) 339 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 95/723 objects degraded (13.140%), 19 pgs degraded) 2026-03-10T05:55:03.966 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:03 vm02 bash[55303]: cluster 2026-03-10T05:55:03.543644+0000 mon.a (mon.0) 339 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 95/723 objects degraded (13.140%), 19 pgs degraded) 2026-03-10T05:55:03.966 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:03 vm02 bash[55303]: cluster 2026-03-10T05:55:03.543700+0000 mon.a (mon.0) 340 : cluster [INF] Cluster is now healthy 2026-03-10T05:55:03.966 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:03 vm02 bash[55303]: cluster 2026-03-10T05:55:03.543700+0000 mon.a (mon.0) 340 : cluster [INF] Cluster is now healthy 2026-03-10T05:55:04.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:03 vm05 bash[43541]: cluster 2026-03-10T05:55:03.543644+0000 mon.a (mon.0) 339 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 95/723 objects degraded (13.140%), 19 pgs degraded) 
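The pgmap entries above track the placement groups returning from peering and active+undersized+degraded to active+clean as osd.1 rejoins, after which the PG_DEGRADED check clears and mon.a reports "Cluster is now healthy". The same state counts can be read directly; a sketch (jq assumed):

    ceph pg stat                                      # one-line PG state summary
    ceph status --format json | jq '.pgmap.pgs_by_state'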
2026-03-10T05:55:04.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:03 vm05 bash[43541]: cluster 2026-03-10T05:55:03.543644+0000 mon.a (mon.0) 339 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 95/723 objects degraded (13.140%), 19 pgs degraded) 2026-03-10T05:55:04.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:03 vm05 bash[43541]: cluster 2026-03-10T05:55:03.543700+0000 mon.a (mon.0) 340 : cluster [INF] Cluster is now healthy 2026-03-10T05:55:04.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:03 vm05 bash[43541]: cluster 2026-03-10T05:55:03.543700+0000 mon.a (mon.0) 340 : cluster [INF] Cluster is now healthy 2026-03-10T05:55:04.500 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:55:04 vm05 bash[41269]: ts=2026-03-10T05:55:04.147Z caller=alerting.go:391 level=warn component="rule manager" alert="unsupported value type" msg="Expanding alert template failed" err="error executing template __alert_CephOSDDown: template: __alert_CephOSDDown:1:358: executing \"__alert_CephOSDDown\" at : error calling query: found duplicate series for the match group {ceph_daemon=\"osd.1\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.1\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.1\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}];many-to-many matching not allowed: matching labels must be unique on one side" data="unsupported value type" 2026-03-10T05:55:04.500 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:55:04 vm05 bash[41269]: ts=2026-03-10T05:55:04.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.1\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.1\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.1\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T05:55:04.963 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:04 vm02 bash[56371]: cluster 2026-03-10T05:55:02.847796+0000 mgr.y (mgr.24992) 154 : cluster [DBG] pgmap v79: 161 pgs: 3 peering, 158 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 1.6 KiB/s rd, 1 op/s 2026-03-10T05:55:04.963 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:04 vm02 bash[56371]: cluster 2026-03-10T05:55:02.847796+0000 mgr.y (mgr.24992) 154 : cluster [DBG] pgmap v79: 161 pgs: 3 peering, 158 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 1.6 KiB/s rd, 1 op/s 2026-03-10T05:55:04.963 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:04 vm02 bash[55303]: cluster 2026-03-10T05:55:02.847796+0000 mgr.y (mgr.24992) 154 : cluster [DBG] pgmap v79: 161 pgs: 3 peering, 158 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 1.6 KiB/s rd, 1 op/s 2026-03-10T05:55:04.963 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:04 vm02 bash[55303]: cluster 2026-03-10T05:55:02.847796+0000 mgr.y (mgr.24992) 154 : cluster [DBG] pgmap v79: 161 pgs: 3 peering, 158 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 1.6 KiB/s rd, 1 op/s 2026-03-10T05:55:04.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:04 vm05 bash[43541]: cluster 2026-03-10T05:55:02.847796+0000 mgr.y (mgr.24992) 154 : cluster [DBG] pgmap v79: 161 pgs: 3 peering, 158 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 1.6 KiB/s rd, 1 op/s 2026-03-10T05:55:05.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:04 vm05 bash[43541]: cluster 2026-03-10T05:55:02.847796+0000 mgr.y (mgr.24992) 154 : cluster [DBG] pgmap v79: 161 pgs: 3 peering, 158 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 1.6 KiB/s rd, 1 op/s 2026-03-10T05:55:06.945 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:55:06 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
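The Prometheus warnings above (CephNodeDiskspaceWarning at 05:54:57, then CephOSDDown and CephOSDFlapping at 05:55:04) all fail for the reason the error text spells out: the TSDB holds two copies of a metadata series, one carrying an extra `cluster` label and one without it (with differing `instance` values), so the one-to-one `* on (...) group_left (...)` join finds duplicate series on its right-hand side. A sketch of how one might confirm and work around this; promtool and the address are assumptions, 9095 being the port cephadm typically gives Prometheus:

    # Confirm the duplicate right-hand-side series for one OSD
    promtool query instant http://vm05:9095 'ceph_osd_metadata{ceph_daemon="osd.1"}'
    # A possible rewrite of the flapping rule that collapses the duplicate
    # labels before joining (sketch, not the rule shipped in ceph_alerts.yml):
    #   (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname)
    #     max by (ceph_daemon, hostname) (ceph_osd_metadata)) * 60 > 1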
2026-03-10T05:55:06.946 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:06 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:55:06.946 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:06 vm05 systemd[1]: Stopping Ceph osd.4 for 107483ae-1c44-11f1-b530-c1172cd6122a... 2026-03-10T05:55:06.946 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:06 vm05 bash[20835]: debug 2026-03-10T05:55:06.918+0000 7f4423763700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T05:55:06.946 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:06 vm05 bash[20835]: debug 2026-03-10T05:55:06.918+0000 7f4423763700 -1 osd.4 111 *** Got signal Terminated *** 2026-03-10T05:55:06.946 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:06 vm05 bash[20835]: debug 2026-03-10T05:55:06.918+0000 7f4423763700 -1 osd.4 111 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T05:55:06.946 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:06 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
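systemd's complaint about KillMode=none repeats once per cephadm-managed daemon on vm05 because they all share the single templated unit file named in the message. cephadm has historically set KillMode=none deliberately so that stopping the unit does not hard-kill the daemon's container, so following systemd's suggestion is a judgment call; mechanically it would be a drop-in like this sketch (unit name taken from the message above):

    # Sketch only; overriding cephadm's choice may break container teardown
    mkdir -p /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service.d
    printf '[Service]\nKillMode=mixed\n' \
        > /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service.d/killmode.conf
    systemctl daemon-reload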
2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: cluster 2026-03-10T05:55:04.848111+0000 mgr.y (mgr.24992) 155 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: cluster 2026-03-10T05:55:04.848111+0000 mgr.y (mgr.24992) 155 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: audit 2026-03-10T05:55:05.682629+0000 mon.a (mon.0) 341 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: audit 2026-03-10T05:55:05.682629+0000 mon.a (mon.0) 341 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: audit 2026-03-10T05:55:05.689481+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: audit 2026-03-10T05:55:05.689481+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: audit 2026-03-10T05:55:05.690272+0000 mon.a (mon.0) 343 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: audit 2026-03-10T05:55:05.690272+0000 mon.a (mon.0) 343 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: audit 2026-03-10T05:55:05.690734+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: audit 2026-03-10T05:55:05.690734+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: audit 2026-03-10T05:55:05.694096+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: audit 2026-03-10T05:55:05.694096+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: audit 2026-03-10T05:55:05.736419+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: audit 2026-03-10T05:55:05.736419+0000 mon.a (mon.0) 346 : audit [DBG] 
from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: audit 2026-03-10T05:55:05.737364+0000 mon.a (mon.0) 347 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: audit 2026-03-10T05:55:05.737364+0000 mon.a (mon.0) 347 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: audit 2026-03-10T05:55:05.738053+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: audit 2026-03-10T05:55:05.738053+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: audit 2026-03-10T05:55:05.738565+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: audit 2026-03-10T05:55:05.738565+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: audit 2026-03-10T05:55:05.739140+0000 mon.a (mon.0) 350 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: audit 2026-03-10T05:55:05.739140+0000 mon.a (mon.0) 350 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: audit 2026-03-10T05:55:06.153658+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: audit 2026-03-10T05:55:06.153658+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: audit 2026-03-10T05:55:06.156755+0000 mon.a (mon.0) 352 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: audit 2026-03-10T05:55:06.156755+0000 mon.a (mon.0) 352 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: audit 2026-03-10T05:55:06.157087+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.24992 
192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 bash[43541]: audit 2026-03-10T05:55:06.157087+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:55:06.946 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:06 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:55:06.946 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:06 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:55:06.947 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:06 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:55:06.947 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:55:06 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:55:06.947 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:55:06 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:55:06.947 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:55:06 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
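The KillMode=none warning repeats once per daemon because every cephadm-managed daemon on the host is an instance of the same systemd template unit, ceph-<fsid>@.service; the generated unit uses KillMode=none deliberately, since the container runtime rather than systemd owns the daemon's process tree (later cephadm versions drop the setting, which silences this warning). To see the exact line systemd is complaining about, with the fsid taken from the log above:

    fsid=107483ae-1c44-11f1-b530-c1172cd6122a
    grep -n 'KillMode' "/etc/systemd/system/ceph-${fsid}@.service"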
2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: cluster 2026-03-10T05:55:04.848111+0000 mgr.y (mgr.24992) 155 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: cluster 2026-03-10T05:55:04.848111+0000 mgr.y (mgr.24992) 155 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: audit 2026-03-10T05:55:05.682629+0000 mon.a (mon.0) 341 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: audit 2026-03-10T05:55:05.682629+0000 mon.a (mon.0) 341 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: audit 2026-03-10T05:55:05.689481+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: audit 2026-03-10T05:55:05.689481+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: audit 2026-03-10T05:55:05.690272+0000 mon.a (mon.0) 343 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: audit 2026-03-10T05:55:05.690272+0000 mon.a (mon.0) 343 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: audit 2026-03-10T05:55:05.690734+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: audit 2026-03-10T05:55:05.690734+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: audit 2026-03-10T05:55:05.694096+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: audit 2026-03-10T05:55:05.694096+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: audit 2026-03-10T05:55:05.736419+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: audit 2026-03-10T05:55:05.736419+0000 mon.a (mon.0) 346 : audit [DBG] 
from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: audit 2026-03-10T05:55:05.737364+0000 mon.a (mon.0) 347 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: audit 2026-03-10T05:55:05.737364+0000 mon.a (mon.0) 347 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: audit 2026-03-10T05:55:05.738053+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: audit 2026-03-10T05:55:05.738053+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: audit 2026-03-10T05:55:05.738565+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: audit 2026-03-10T05:55:05.738565+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: audit 2026-03-10T05:55:05.739140+0000 mon.a (mon.0) 350 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: audit 2026-03-10T05:55:05.739140+0000 mon.a (mon.0) 350 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: audit 2026-03-10T05:55:06.153658+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: audit 2026-03-10T05:55:06.153658+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: audit 2026-03-10T05:55:06.156755+0000 mon.a (mon.0) 352 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: audit 2026-03-10T05:55:06.156755+0000 mon.a (mon.0) 352 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: audit 2026-03-10T05:55:06.157087+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.24992 
192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:55:06.947 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:06 vm02 bash[56371]: audit 2026-03-10T05:55:06.157087+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:55:06.948 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: cluster 2026-03-10T05:55:04.848111+0000 mgr.y (mgr.24992) 155 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T05:55:06.948 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: cluster 2026-03-10T05:55:04.848111+0000 mgr.y (mgr.24992) 155 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T05:55:06.948 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: audit 2026-03-10T05:55:05.682629+0000 mon.a (mon.0) 341 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:06.948 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: audit 2026-03-10T05:55:05.682629+0000 mon.a (mon.0) 341 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:06.948 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: audit 2026-03-10T05:55:05.689481+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:06.948 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: audit 2026-03-10T05:55:05.689481+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:06.948 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: audit 2026-03-10T05:55:05.690272+0000 mon.a (mon.0) 343 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:55:06.948 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: audit 2026-03-10T05:55:05.690272+0000 mon.a (mon.0) 343 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:55:06.948 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: audit 2026-03-10T05:55:05.690734+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:55:06.948 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: audit 2026-03-10T05:55:05.690734+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:55:06.948 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: audit 2026-03-10T05:55:05.694096+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:06.948 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: audit 2026-03-10T05:55:05.694096+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:06.948 
INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: audit 2026-03-10T05:55:05.736419+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:55:06.948 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: audit 2026-03-10T05:55:05.736419+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:55:06.948 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: audit 2026-03-10T05:55:05.737364+0000 mon.a (mon.0) 347 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:06.948 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: audit 2026-03-10T05:55:05.737364+0000 mon.a (mon.0) 347 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:06.948 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: audit 2026-03-10T05:55:05.738053+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:06.948 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: audit 2026-03-10T05:55:05.738053+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:06.948 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: audit 2026-03-10T05:55:05.738565+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:06.948 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: audit 2026-03-10T05:55:05.738565+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:06.948 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: audit 2026-03-10T05:55:05.739140+0000 mon.a (mon.0) 350 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-10T05:55:06.948 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: audit 2026-03-10T05:55:05.739140+0000 mon.a (mon.0) 350 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-10T05:55:06.948 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: audit 2026-03-10T05:55:06.153658+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:06.948 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: audit 2026-03-10T05:55:06.153658+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:06.948 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: audit 2026-03-10T05:55:06.156755+0000 mon.a (mon.0) 352 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T05:55:06.948 
INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: audit 2026-03-10T05:55:06.156755+0000 mon.a (mon.0) 352 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T05:55:06.948 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: audit 2026-03-10T05:55:06.157087+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:55:06.948 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:06 vm02 bash[55303]: audit 2026-03-10T05:55:06.157087+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:55:07.250 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:55:06 vm05 bash[41269]: ts=2026-03-10T05:55:06.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T05:55:07.999 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:07 vm05 bash[45229]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-osd-4 2026-03-10T05:55:08.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:07 vm05 bash[43541]: audit 2026-03-10T05:55:05.739270+0000 mgr.y (mgr.24992) 156 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-10T05:55:08.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:07 vm05 bash[43541]: audit 2026-03-10T05:55:05.739270+0000 mgr.y (mgr.24992) 156 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-10T05:55:08.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:07 vm05 bash[43541]: cephadm 2026-03-10T05:55:05.739966+0000 mgr.y (mgr.24992) 157 : cephadm [INF] Upgrade: osd.4 is safe to restart 2026-03-10T05:55:08.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:07 vm05 bash[43541]: cephadm 2026-03-10T05:55:05.739966+0000 mgr.y (mgr.24992) 157 : cephadm [INF] Upgrade: osd.4 is safe to restart 2026-03-10T05:55:08.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:07 vm05 bash[43541]: cephadm 2026-03-10T05:55:06.147956+0000 mgr.y (mgr.24992) 158 : cephadm [INF] Upgrade: Updating osd.4 2026-03-10T05:55:08.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:07 vm05 bash[43541]: cephadm 2026-03-10T05:55:06.147956+0000 mgr.y (mgr.24992) 158 : cephadm [INF] Upgrade: Updating osd.4 2026-03-10T05:55:08.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:07 vm05 bash[43541]: cephadm 2026-03-10T05:55:06.158250+0000 mgr.y (mgr.24992) 159 : cephadm [INF] Deploying daemon osd.4 on vm05 2026-03-10T05:55:08.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:07 vm05 bash[43541]: cephadm 2026-03-10T05:55:06.158250+0000 mgr.y (mgr.24992) 159 : cephadm [INF] Deploying daemon osd.4 on vm05 2026-03-10T05:55:08.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:07 vm05 bash[43541]: cluster 2026-03-10T05:55:06.917747+0000 mon.a (mon.0) 354 : cluster [INF] osd.4 marked itself down and dead 2026-03-10T05:55:08.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:07 vm05 bash[43541]: cluster 2026-03-10T05:55:06.917747+0000 mon.a (mon.0) 354 : cluster [INF] osd.4 marked itself down and dead 2026-03-10T05:55:08.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:07 vm02 bash[55303]: audit 2026-03-10T05:55:05.739270+0000 mgr.y (mgr.24992) 156 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-10T05:55:08.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:07 vm02 bash[55303]: audit 2026-03-10T05:55:05.739270+0000 mgr.y (mgr.24992) 156 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-10T05:55:08.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:07 vm02 bash[55303]: cephadm 2026-03-10T05:55:05.739966+0000 mgr.y (mgr.24992) 157 : cephadm [INF] Upgrade: osd.4 is safe to restart 2026-03-10T05:55:08.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:07 vm02 bash[55303]: cephadm 2026-03-10T05:55:05.739966+0000 mgr.y (mgr.24992) 157 : cephadm [INF] Upgrade: osd.4 is safe to restart 2026-03-10T05:55:08.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:07 vm02 bash[55303]: cephadm 2026-03-10T05:55:06.147956+0000 mgr.y (mgr.24992) 158 : cephadm [INF] Upgrade: Updating osd.4 2026-03-10T05:55:08.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:07 vm02 bash[55303]: cephadm 2026-03-10T05:55:06.147956+0000 mgr.y (mgr.24992) 158 : cephadm [INF] Upgrade: Updating osd.4 2026-03-10T05:55:08.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:07 vm02 bash[55303]: cephadm 2026-03-10T05:55:06.158250+0000 mgr.y (mgr.24992) 159 : cephadm [INF] Deploying daemon osd.4 on vm05 2026-03-10T05:55:08.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:07 vm02 bash[55303]: cephadm 2026-03-10T05:55:06.158250+0000 mgr.y (mgr.24992) 159 : cephadm [INF] Deploying daemon osd.4 on vm05 2026-03-10T05:55:08.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:07 vm02 bash[55303]: cluster 2026-03-10T05:55:06.917747+0000 mon.a (mon.0) 354 : cluster [INF] osd.4 marked itself down and dead 2026-03-10T05:55:08.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:07 vm02 bash[55303]: cluster 2026-03-10T05:55:06.917747+0000 mon.a (mon.0) 354 : cluster [INF] osd.4 marked itself down and dead 2026-03-10T05:55:08.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:07 vm02 bash[56371]: audit 2026-03-10T05:55:05.739270+0000 mgr.y (mgr.24992) 156 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-10T05:55:08.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:07 vm02 bash[56371]: audit 2026-03-10T05:55:05.739270+0000 mgr.y (mgr.24992) 156 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-10T05:55:08.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:07 vm02 bash[56371]: cephadm 2026-03-10T05:55:05.739966+0000 mgr.y (mgr.24992) 157 : cephadm [INF] Upgrade: osd.4 is safe to restart 2026-03-10T05:55:08.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:07 vm02 bash[56371]: cephadm 2026-03-10T05:55:05.739966+0000 mgr.y (mgr.24992) 157 : cephadm [INF] Upgrade: osd.4 is safe to restart 2026-03-10T05:55:08.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:07 vm02 bash[56371]: cephadm 2026-03-10T05:55:06.147956+0000 mgr.y (mgr.24992) 158 : cephadm [INF] Upgrade: Updating osd.4 2026-03-10T05:55:08.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:07 vm02 bash[56371]: cephadm 2026-03-10T05:55:06.147956+0000 mgr.y (mgr.24992) 158 : cephadm [INF] Upgrade: Updating osd.4 2026-03-10T05:55:08.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:07 vm02 bash[56371]: cephadm 2026-03-10T05:55:06.158250+0000 mgr.y (mgr.24992) 159 : cephadm [INF] Deploying daemon osd.4 on vm05 2026-03-10T05:55:08.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:07 vm02 bash[56371]: cephadm 2026-03-10T05:55:06.158250+0000 mgr.y (mgr.24992) 159 : cephadm [INF] Deploying daemon osd.4 on vm05 2026-03-10T05:55:08.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:07 vm02 bash[56371]: cluster 2026-03-10T05:55:06.917747+0000 mon.a (mon.0) 354 : cluster [INF] osd.4 marked itself down and dead 2026-03-10T05:55:08.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:07 vm02 bash[56371]: cluster 2026-03-10T05:55:06.917747+0000 mon.a (mon.0) 354 : cluster [INF] osd.4 marked itself down and dead 2026-03-10T05:55:08.276 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:08 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:55:08.276 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:55:08 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:55:08.276 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:08 vm05 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.4.service: Deactivated successfully. 2026-03-10T05:55:08.276 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:08 vm05 systemd[1]: Stopped Ceph osd.4 for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T05:55:08.276 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:08 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T05:55:08.276 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:08 vm05 systemd[1]: Started Ceph osd.4 for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T05:55:08.276 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:08 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:55:08.276 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:08 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:55:08.276 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:08 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:55:08.276 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:55:08 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:55:08.276 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:55:08 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:55:08.276 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:55:08 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
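The Stopped/Started pair above is the host-visible half of the redeploy. Because each daemon is one instance of the shared template unit, its state and recent output can be checked directly on vm05; a short sketch, reusing the fsid from the log:

    fsid=107483ae-1c44-11f1-b530-c1172cd6122a
    systemctl status "ceph-${fsid}@osd.4.service"
    journalctl -u "ceph-${fsid}@osd.4.service" --since '5 min ago'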
2026-03-10T05:55:08.723 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:08 vm05 bash[45436]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T05:55:08.723 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:08 vm05 bash[45436]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T05:55:08.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:08 vm05 bash[43541]: cluster 2026-03-10T05:55:06.848506+0000 mgr.y (mgr.24992) 160 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T05:55:09.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:08 vm05 bash[43541]: cluster 2026-03-10T05:55:06.848506+0000 mgr.y (mgr.24992) 160 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T05:55:09.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:08 vm05 bash[43541]: audit 2026-03-10T05:55:06.946305+0000 mgr.y (mgr.24992) 161 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:55:09.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:08 vm05 bash[43541]: audit 2026-03-10T05:55:06.946305+0000 mgr.y (mgr.24992) 161 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:55:09.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:08 vm05 bash[43541]: cluster 2026-03-10T05:55:07.685648+0000 mon.a (mon.0) 355 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T05:55:09.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:08 vm05 bash[43541]: cluster 2026-03-10T05:55:07.685648+0000 mon.a (mon.0) 355 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T05:55:09.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:08 vm05 bash[43541]: cluster 2026-03-10T05:55:07.708205+0000 mon.a (mon.0) 356 : cluster [DBG] osdmap e112: 8 total, 7 up, 8 in 2026-03-10T05:55:09.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:08 vm05 bash[43541]: cluster 2026-03-10T05:55:07.708205+0000 mon.a (mon.0) 356 : cluster [DBG] osdmap e112: 8 total, 7 up, 8 in 2026-03-10T05:55:09.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:08 vm05 bash[43541]: audit 2026-03-10T05:55:08.261607+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:09.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:08 vm05 bash[43541]: audit 2026-03-10T05:55:08.261607+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:09.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:08 vm05 bash[43541]: audit 2026-03-10T05:55:08.269503+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:09.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:08 vm05 bash[43541]: audit 2026-03-10T05:55:08.269503+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:09.084 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:08 vm02 bash[55303]: cluster 2026-03-10T05:55:06.848506+0000 mgr.y (mgr.24992) 160 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T05:55:09.085 
INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:08 vm02 bash[55303]: cluster 2026-03-10T05:55:06.848506+0000 mgr.y (mgr.24992) 160 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T05:55:09.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:08 vm02 bash[55303]: audit 2026-03-10T05:55:06.946305+0000 mgr.y (mgr.24992) 161 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:55:09.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:08 vm02 bash[55303]: audit 2026-03-10T05:55:06.946305+0000 mgr.y (mgr.24992) 161 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:55:09.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:08 vm02 bash[55303]: cluster 2026-03-10T05:55:07.685648+0000 mon.a (mon.0) 355 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T05:55:09.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:08 vm02 bash[55303]: cluster 2026-03-10T05:55:07.685648+0000 mon.a (mon.0) 355 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T05:55:09.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:08 vm02 bash[55303]: cluster 2026-03-10T05:55:07.708205+0000 mon.a (mon.0) 356 : cluster [DBG] osdmap e112: 8 total, 7 up, 8 in 2026-03-10T05:55:09.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:08 vm02 bash[55303]: cluster 2026-03-10T05:55:07.708205+0000 mon.a (mon.0) 356 : cluster [DBG] osdmap e112: 8 total, 7 up, 8 in 2026-03-10T05:55:09.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:08 vm02 bash[55303]: audit 2026-03-10T05:55:08.261607+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:09.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:08 vm02 bash[55303]: audit 2026-03-10T05:55:08.261607+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:09.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:08 vm02 bash[55303]: audit 2026-03-10T05:55:08.269503+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:09.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:08 vm02 bash[55303]: audit 2026-03-10T05:55:08.269503+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:09.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:08 vm02 bash[56371]: cluster 2026-03-10T05:55:06.848506+0000 mgr.y (mgr.24992) 160 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T05:55:09.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:08 vm02 bash[56371]: cluster 2026-03-10T05:55:06.848506+0000 mgr.y (mgr.24992) 160 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T05:55:09.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:08 vm02 bash[56371]: audit 2026-03-10T05:55:06.946305+0000 mgr.y (mgr.24992) 161 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:55:09.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:08 vm02 
bash[56371]: audit 2026-03-10T05:55:06.946305+0000 mgr.y (mgr.24992) 161 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:55:09.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:08 vm02 bash[56371]: cluster 2026-03-10T05:55:07.685648+0000 mon.a (mon.0) 355 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T05:55:09.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:08 vm02 bash[56371]: cluster 2026-03-10T05:55:07.685648+0000 mon.a (mon.0) 355 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T05:55:09.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:08 vm02 bash[56371]: cluster 2026-03-10T05:55:07.708205+0000 mon.a (mon.0) 356 : cluster [DBG] osdmap e112: 8 total, 7 up, 8 in 2026-03-10T05:55:09.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:08 vm02 bash[56371]: cluster 2026-03-10T05:55:07.708205+0000 mon.a (mon.0) 356 : cluster [DBG] osdmap e112: 8 total, 7 up, 8 in 2026-03-10T05:55:09.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:08 vm02 bash[56371]: audit 2026-03-10T05:55:08.261607+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:09.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:08 vm02 bash[56371]: audit 2026-03-10T05:55:08.261607+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:09.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:08 vm02 bash[56371]: audit 2026-03-10T05:55:08.269503+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:09.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:08 vm02 bash[56371]: audit 2026-03-10T05:55:08.269503+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:09.611 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:09 vm05 bash[45436]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-10T05:55:09.611 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:09 vm05 bash[45436]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T05:55:09.611 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:09 vm05 bash[45436]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T05:55:09.611 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:09 vm05 bash[45436]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 2026-03-10T05:55:09.611 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:09 vm05 bash[45436]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-b68765b1-f9bc-468f-ba45-c4629dee5715/osd-block-49541bd1-b8b0-4d09-9b97-6ca490c33f9d --path /var/lib/ceph/osd/ceph-4 --no-mon-config 2026-03-10T05:55:09.999 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:09 vm05 bash[45436]: Running command: /usr/bin/ln -snf /dev/ceph-b68765b1-f9bc-468f-ba45-c4629dee5715/osd-block-49541bd1-b8b0-4d09-9b97-6ca490c33f9d /var/lib/ceph/osd/ceph-4/block 2026-03-10T05:55:10.000 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:09 vm05 bash[45436]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block 2026-03-10T05:55:10.000 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:09 vm05 bash[45436]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 2026-03-10T05:55:10.000 
INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:09 vm05 bash[45436]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 2026-03-10T05:55:10.000 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:09 vm05 bash[45436]: --> ceph-volume lvm activate successful for osd ID: 4 2026-03-10T05:55:10.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:09 vm05 bash[43541]: cluster 2026-03-10T05:55:08.700132+0000 mon.a (mon.0) 359 : cluster [DBG] osdmap e113: 8 total, 7 up, 8 in 2026-03-10T05:55:10.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:09 vm05 bash[43541]: cluster 2026-03-10T05:55:08.700132+0000 mon.a (mon.0) 359 : cluster [DBG] osdmap e113: 8 total, 7 up, 8 in 2026-03-10T05:55:10.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:09 vm05 bash[43541]: cluster 2026-03-10T05:55:08.848831+0000 mgr.y (mgr.24992) 162 : cluster [DBG] pgmap v84: 161 pgs: 7 peering, 19 stale+active+clean, 135 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T05:55:10.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:09 vm05 bash[43541]: cluster 2026-03-10T05:55:08.848831+0000 mgr.y (mgr.24992) 162 : cluster [DBG] pgmap v84: 161 pgs: 7 peering, 19 stale+active+clean, 135 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T05:55:10.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:09 vm02 bash[56371]: cluster 2026-03-10T05:55:08.700132+0000 mon.a (mon.0) 359 : cluster [DBG] osdmap e113: 8 total, 7 up, 8 in 2026-03-10T05:55:10.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:09 vm02 bash[56371]: cluster 2026-03-10T05:55:08.700132+0000 mon.a (mon.0) 359 : cluster [DBG] osdmap e113: 8 total, 7 up, 8 in 2026-03-10T05:55:10.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:09 vm02 bash[56371]: cluster 2026-03-10T05:55:08.848831+0000 mgr.y (mgr.24992) 162 : cluster [DBG] pgmap v84: 161 pgs: 7 peering, 19 stale+active+clean, 135 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T05:55:10.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:09 vm02 bash[56371]: cluster 2026-03-10T05:55:08.848831+0000 mgr.y (mgr.24992) 162 : cluster [DBG] pgmap v84: 161 pgs: 7 peering, 19 stale+active+clean, 135 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T05:55:10.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:09 vm02 bash[55303]: cluster 2026-03-10T05:55:08.700132+0000 mon.a (mon.0) 359 : cluster [DBG] osdmap e113: 8 total, 7 up, 8 in 2026-03-10T05:55:10.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:09 vm02 bash[55303]: cluster 2026-03-10T05:55:08.700132+0000 mon.a (mon.0) 359 : cluster [DBG] osdmap e113: 8 total, 7 up, 8 in 2026-03-10T05:55:10.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:09 vm02 bash[55303]: cluster 2026-03-10T05:55:08.848831+0000 mgr.y (mgr.24992) 162 : cluster [DBG] pgmap v84: 161 pgs: 7 peering, 19 stale+active+clean, 135 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T05:55:10.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:09 vm02 bash[55303]: cluster 2026-03-10T05:55:08.848831+0000 mgr.y (mgr.24992) 162 : cluster [DBG] pgmap v84: 161 pgs: 7 peering, 19 stale+active+clean, 135 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T05:55:10.732 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:10 vm05 bash[45790]: debug 
2026-03-10T05:55:10.438+0000 7f2380055740 -1 Falling back to public interface 2026-03-10T05:55:10.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:10 vm05 bash[43541]: cluster 2026-03-10T05:55:09.721571+0000 mon.a (mon.0) 360 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY) 2026-03-10T05:55:10.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:10 vm05 bash[43541]: cluster 2026-03-10T05:55:09.721571+0000 mon.a (mon.0) 360 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY) 2026-03-10T05:55:11.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:10 vm02 bash[56371]: cluster 2026-03-10T05:55:09.721571+0000 mon.a (mon.0) 360 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY) 2026-03-10T05:55:11.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:10 vm02 bash[56371]: cluster 2026-03-10T05:55:09.721571+0000 mon.a (mon.0) 360 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY) 2026-03-10T05:55:11.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:10 vm02 bash[55303]: cluster 2026-03-10T05:55:09.721571+0000 mon.a (mon.0) 360 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY) 2026-03-10T05:55:11.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:10 vm02 bash[55303]: cluster 2026-03-10T05:55:09.721571+0000 mon.a (mon.0) 360 : cluster [WRN] Health check failed: Reduced data availability: 4 pgs peering (PG_AVAILABILITY) 2026-03-10T05:55:11.749 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:11 vm05 bash[45790]: debug 2026-03-10T05:55:11.394+0000 7f2380055740 -1 osd.4 0 read_superblock omap replica is missing. 
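Between the unit start and the OSD boot, the bash[45436] records above show ceph-volume re-priming /var/lib/ceph/osd/ceph-4: the raw activator finds nothing to do (expected for an LVM-backed OSD), then prime-osd-dir, the block symlink, and the chown calls succeed, ending in "lvm activate successful for osd ID: 4". The OSD_DOWN and PG_AVAILABILITY health checks are the transient window while osd.4 rejoins. A hypothetical manual equivalent of that activation, with the OSD id and fsid read off the osd-block path in the log; cephadm normally runs this inside the daemon's container:

    ceph-volume lvm activate 4 49541bd1-b8b0-4d09-9b97-6ca490c33f9d --no-systemd
    ceph health detail   # watch OSD_DOWN / PG_AVAILABILITY clear as osd.4 comes back up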
2026-03-10T05:55:11.750 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:11 vm05 bash[45790]: debug 2026-03-10T05:55:11.434+0000 7f2380055740 -1 osd.4 111 log_to_monitors true
2026-03-10T05:55:12.249 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:11 vm05 bash[45790]: debug 2026-03-10T05:55:11.938+0000 7f2377e00640 -1 osd.4 111 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-10T05:55:12.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:11 vm05 bash[43541]: cluster 2026-03-10T05:55:10.849128+0000 mgr.y (mgr.24992) 163 : cluster [DBG] pgmap v85: 161 pgs: 3 active+undersized, 7 peering, 17 stale+active+clean, 6 active+undersized+degraded, 128 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 29/723 objects degraded (4.011%)
2026-03-10T05:55:12.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:11 vm05 bash[43541]: audit 2026-03-10T05:55:10.886427+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:12.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:11 vm05 bash[43541]: audit 2026-03-10T05:55:10.887145+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:55:12.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:11 vm05 bash[43541]: audit 2026-03-10T05:55:10.909455+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:12.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:11 vm05 bash[43541]: audit 2026-03-10T05:55:11.438058+0000 mon.a (mon.0) 364 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T05:55:12.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:11 vm05 bash[43541]: audit 2026-03-10T05:55:11.441709+0000 mon.b (mon.2) 7 : audit [INF] from='osd.4 [v2:192.168.123.105:6800/2485869618,v1:192.168.123.105:6801/2485869618]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T05:55:12.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:11 vm05 bash[43541]: cluster 2026-03-10T05:55:11.731701+0000 mon.a (mon.0) 365 : cluster [WRN] Health check failed: Degraded data redundancy: 29/723 objects degraded (4.011%), 6 pgs degraded (PG_DEGRADED)
2026-03-10T05:55:12.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:11 vm02 bash[56371]: cluster 2026-03-10T05:55:10.849128+0000 mgr.y (mgr.24992) 163 : cluster [DBG] pgmap v85: 161 pgs: 3 active+undersized, 7 peering, 17 stale+active+clean, 6 active+undersized+degraded, 128 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 29/723 objects degraded (4.011%)
2026-03-10T05:55:12.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:11 vm02 bash[56371]: audit 2026-03-10T05:55:10.886427+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:12.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:11 vm02 bash[56371]: audit 2026-03-10T05:55:10.887145+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:55:12.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:11 vm02 bash[56371]: audit 2026-03-10T05:55:10.909455+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:12.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:11 vm02 bash[56371]: audit 2026-03-10T05:55:11.438058+0000 mon.a (mon.0) 364 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T05:55:12.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:11 vm02 bash[56371]: audit 2026-03-10T05:55:11.441709+0000 mon.b (mon.2) 7 : audit [INF] from='osd.4 [v2:192.168.123.105:6800/2485869618,v1:192.168.123.105:6801/2485869618]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T05:55:12.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:11 vm02 bash[56371]: cluster 2026-03-10T05:55:11.731701+0000 mon.a (mon.0) 365 : cluster [WRN] Health check failed: Degraded data redundancy: 29/723 objects degraded (4.011%), 6 pgs degraded (PG_DEGRADED)
2026-03-10T05:55:12.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:11 vm02 bash[55303]: cluster 2026-03-10T05:55:10.849128+0000 mgr.y (mgr.24992) 163 : cluster [DBG] pgmap v85: 161 pgs: 3 active+undersized, 7 peering, 17 stale+active+clean, 6 active+undersized+degraded, 128 active+clean; 457 KiB data, 196 MiB used, 160 GiB / 160 GiB avail; 29/723 objects degraded (4.011%)
2026-03-10T05:55:12.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:11 vm02 bash[55303]: audit 2026-03-10T05:55:10.886427+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:12.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:11 vm02 bash[55303]: audit 2026-03-10T05:55:10.887145+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:55:12.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:11 vm02 bash[55303]: audit 2026-03-10T05:55:10.909455+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:12.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:11 vm02 bash[55303]: audit 2026-03-10T05:55:11.438058+0000 mon.a (mon.0) 364 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T05:55:12.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:11 vm02 bash[55303]: audit 2026-03-10T05:55:11.441709+0000 mon.b (mon.2) 7 : audit [INF] from='osd.4 [v2:192.168.123.105:6800/2485869618,v1:192.168.123.105:6801/2485869618]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T05:55:12.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:11 vm02 bash[55303]: cluster 2026-03-10T05:55:11.731701+0000 mon.a (mon.0) 365 : cluster [WRN] Health check failed: Degraded data redundancy: 29/723 objects degraded (4.011%), 6 pgs degraded (PG_DEGRADED)
2026-03-10T05:55:13.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:12 vm05 bash[43541]: audit 2026-03-10T05:55:11.908788+0000 mon.a (mon.0) 366 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished
2026-03-10T05:55:13.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:12 vm05 bash[43541]: cluster 2026-03-10T05:55:11.913088+0000 mon.a (mon.0) 367 : cluster [DBG] osdmap e114: 8 total, 7 up, 8 in
2026-03-10T05:55:13.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:12 vm05 bash[43541]: audit 2026-03-10T05:55:11.920165+0000 mon.a (mon.0) 368 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T05:55:13.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:12 vm05 bash[43541]: audit 2026-03-10T05:55:11.922941+0000 mon.b (mon.2) 8 : audit [INF] from='osd.4 [v2:192.168.123.105:6800/2485869618,v1:192.168.123.105:6801/2485869618]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T05:55:13.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:12 vm02 bash[56371]: audit 2026-03-10T05:55:11.908788+0000 mon.a (mon.0) 366 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished
2026-03-10T05:55:13.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:12 vm02 bash[56371]: cluster 2026-03-10T05:55:11.913088+0000 mon.a (mon.0) 367 : cluster [DBG] osdmap e114: 8 total, 7 up, 8 in
2026-03-10T05:55:13.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:12 vm02 bash[56371]: audit 2026-03-10T05:55:11.920165+0000 mon.a (mon.0) 368 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T05:55:13.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:12 vm02 bash[56371]: audit 2026-03-10T05:55:11.922941+0000 mon.b (mon.2) 8 : audit [INF] from='osd.4 [v2:192.168.123.105:6800/2485869618,v1:192.168.123.105:6801/2485869618]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T05:55:13.335 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:55:12 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:55:12] "GET /metrics HTTP/1.1" 200 38061 "" "Prometheus/2.51.0"
2026-03-10T05:55:13.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:12 vm02 bash[55303]: audit 2026-03-10T05:55:11.908788+0000 mon.a (mon.0) 366 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished
2026-03-10T05:55:13.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:12 vm02 bash[55303]: cluster 2026-03-10T05:55:11.913088+0000 mon.a (mon.0) 367 : cluster [DBG] osdmap e114: 8 total, 7 up, 8 in
2026-03-10T05:55:13.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:12 vm02 bash[55303]: audit 2026-03-10T05:55:11.920165+0000 mon.a (mon.0) 368 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T05:55:13.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:12 vm02 bash[55303]: audit 2026-03-10T05:55:11.922941+0000 mon.b (mon.2) 8 : audit [INF] from='osd.4 [v2:192.168.123.105:6800/2485869618,v1:192.168.123.105:6801/2485869618]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T05:55:14.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:13 vm05 bash[43541]: cluster 2026-03-10T05:55:12.849441+0000 mgr.y (mgr.24992) 164 : cluster [DBG] pgmap v87: 161 pgs: 38 active+undersized, 7 peering, 23 active+undersized+degraded, 93 active+clean; 457 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 116/723 objects degraded (16.044%)
2026-03-10T05:55:14.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:13 vm05 bash[43541]: cluster 2026-03-10T05:55:12.910062+0000 mon.a (mon.0) 369 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T05:55:14.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:13 vm05 bash[43541]: cluster 2026-03-10T05:55:12.952598+0000 mon.a (mon.0) 370 : cluster [INF] osd.4 [v2:192.168.123.105:6800/2485869618,v1:192.168.123.105:6801/2485869618] boot
2026-03-10T05:55:14.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:13 vm05 bash[43541]: cluster 2026-03-10T05:55:12.953451+0000 mon.a (mon.0) 371 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in
2026-03-10T05:55:14.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:13 vm05 bash[43541]: audit 2026-03-10T05:55:12.953725+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T05:55:14.250 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:55:14 vm05 bash[41269]: ts=2026-03-10T05:55:14.147Z caller=alerting.go:391 level=warn component="rule manager" alert="unsupported value type" msg="Expanding alert template failed" err="error executing template __alert_CephOSDDown: template: __alert_CephOSDDown:1:358: executing \"__alert_CephOSDDown\" at : error calling query: found duplicate series for the match group {ceph_daemon=\"osd.4\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.4\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.4\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side" data="unsupported value type"
2026-03-10T05:55:14.250 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:55:14 vm05 bash[41269]: ts=2026-03-10T05:55:14.148Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.4\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.4\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.4\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:55:14.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:13 vm02 bash[56371]: cluster 2026-03-10T05:55:12.849441+0000 mgr.y (mgr.24992) 164 : cluster [DBG] pgmap v87: 161 pgs: 38 active+undersized, 7 peering, 23 active+undersized+degraded, 93 active+clean; 457 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 116/723 objects degraded (16.044%)
2026-03-10T05:55:14.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:13 vm02 bash[56371]: cluster 2026-03-10T05:55:12.910062+0000 mon.a (mon.0) 369 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T05:55:14.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:13 vm02 bash[56371]: cluster 2026-03-10T05:55:12.952598+0000 mon.a (mon.0) 370 : cluster [INF] osd.4 [v2:192.168.123.105:6800/2485869618,v1:192.168.123.105:6801/2485869618] boot
2026-03-10T05:55:14.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:13 vm02 bash[56371]: cluster 2026-03-10T05:55:12.953451+0000 mon.a (mon.0) 371 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in
2026-03-10T05:55:14.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:13 vm02 bash[56371]: audit 2026-03-10T05:55:12.953725+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T05:55:14.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:13 vm02 bash[55303]: cluster 2026-03-10T05:55:12.849441+0000 mgr.y (mgr.24992) 164 : cluster [DBG] pgmap v87: 161 pgs: 38 active+undersized, 7 peering, 23 active+undersized+degraded, 93 active+clean; 457 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 116/723 objects degraded (16.044%)
2026-03-10T05:55:14.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:13 vm02 bash[55303]: cluster 2026-03-10T05:55:12.910062+0000 mon.a (mon.0) 369 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T05:55:14.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:13 vm02 bash[55303]: cluster 2026-03-10T05:55:12.952598+0000 mon.a (mon.0) 370 : cluster [INF] osd.4 [v2:192.168.123.105:6800/2485869618,v1:192.168.123.105:6801/2485869618] boot
2026-03-10T05:55:14.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:13 vm02 bash[55303]: cluster 2026-03-10T05:55:12.953451+0000 mon.a (mon.0) 371 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in
2026-03-10T05:55:14.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:13 vm02 bash[55303]: audit 2026-03-10T05:55:12.953725+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T05:55:15.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:14 vm05 bash[43541]: cluster 2026-03-10T05:55:13.922250+0000 mon.a (mon.0) 373 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in
2026-03-10T05:55:15.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:14 vm02 bash[56371]: cluster 2026-03-10T05:55:13.922250+0000 mon.a (mon.0) 373 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in
2026-03-10T05:55:15.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:14 vm02 bash[55303]: cluster 2026-03-10T05:55:13.922250+0000 mon.a (mon.0) 373 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in
2026-03-10T05:55:16.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:16 vm05 bash[43541]: cluster 2026-03-10T05:55:14.849862+0000 mgr.y (mgr.24992) 165 : cluster [DBG] pgmap v90: 161 pgs: 38 active+undersized, 7 peering, 23 active+undersized+degraded, 93 active+clean; 457 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 116/723 objects degraded (16.044%)
2026-03-10T05:55:16.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:16 vm05 bash[43541]: audit 2026-03-10T05:55:14.999289+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:16.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:16 vm05 bash[43541]: audit 2026-03-10T05:55:15.003962+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:16.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:16 vm05 bash[43541]: audit 2026-03-10T05:55:15.580540+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:16.250 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:16 vm05 bash[43541]: audit 2026-03-10T05:55:15.586546+0000 mon.a (mon.0) 377 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:16.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:15 vm02 bash[56371]: cluster 2026-03-10T05:55:14.849862+0000 mgr.y (mgr.24992) 165 : cluster [DBG] pgmap v90: 161 pgs: 38 active+undersized, 7 peering, 23 active+undersized+degraded, 93 active+clean; 457 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 116/723 objects degraded (16.044%)
2026-03-10T05:55:16.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:15 vm02 bash[56371]: audit 2026-03-10T05:55:14.999289+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:16.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:15 vm02 bash[56371]: audit 2026-03-10T05:55:15.003962+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:16.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:15 vm02 bash[56371]: audit 2026-03-10T05:55:15.580540+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:16.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:15 vm02 bash[56371]: audit 2026-03-10T05:55:15.586546+0000 mon.a (mon.0) 377 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:16.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:15 vm02 bash[55303]: cluster 2026-03-10T05:55:14.849862+0000 mgr.y (mgr.24992) 165 : cluster [DBG] pgmap v90: 161 pgs: 38 active+undersized, 7 peering, 23 active+undersized+degraded, 93 active+clean; 457 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 116/723 objects degraded (16.044%)
2026-03-10T05:55:16.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:15 vm02 bash[55303]: audit 2026-03-10T05:55:14.999289+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:16.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:15 vm02 bash[55303]: audit 2026-03-10T05:55:15.003962+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:16.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:15 vm02 bash[55303]: audit 2026-03-10T05:55:15.580540+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:16.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:15 vm02 bash[55303]: audit 2026-03-10T05:55:15.586546+0000 mon.a (mon.0) 377 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:17.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:17 vm05 bash[43541]: cluster 2026-03-10T05:55:16.186006+0000 mon.a (mon.0) 378 : cluster [WRN] Health check update: Reduced data availability: 3 pgs inactive, 4 pgs peering (PG_AVAILABILITY)
2026-03-10T05:55:17.250 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:55:16 vm05 bash[41269]: ts=2026-03-10T05:55:16.949Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:55:17.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:17 vm02 bash[56371]: cluster 2026-03-10T05:55:16.186006+0000 mon.a (mon.0) 378 : cluster [WRN] Health check update: Reduced data availability: 3 pgs inactive, 4 pgs peering (PG_AVAILABILITY)
2026-03-10T05:55:17.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:17 vm02 bash[55303]: cluster 2026-03-10T05:55:16.186006+0000 mon.a (mon.0) 378 : cluster [WRN] Health check update: Reduced data availability: 3 pgs inactive, 4 pgs peering (PG_AVAILABILITY)
2026-03-10T05:55:18.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:18 vm02 bash[56371]: cluster 2026-03-10T05:55:16.850386+0000 mgr.y (mgr.24992) 166 : cluster [DBG] pgmap v91: 161 pgs: 19 active+undersized, 7 peering, 12 active+undersized+degraded, 123 active+clean; 457 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 55/723 objects degraded (7.607%)
2026-03-10T05:55:18.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:18 vm02 bash[56371]: audit 2026-03-10T05:55:16.956491+0000 mgr.y (mgr.24992) 167 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:55:18.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:18 vm02 bash[56371]: cluster 2026-03-10T05:55:16.998137+0000 mon.a (mon.0) 379 : cluster [WRN] Health check update: Degraded data redundancy: 55/723 objects degraded (7.607%), 12 pgs degraded (PG_DEGRADED)
2026-03-10T05:55:18.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:18 vm02 bash[55303]: cluster 2026-03-10T05:55:16.850386+0000 mgr.y (mgr.24992) 166 : cluster [DBG] pgmap v91: 161 pgs: 19 active+undersized, 7 peering, 12 active+undersized+degraded, 123 active+clean; 457 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 55/723 objects degraded (7.607%)
2026-03-10T05:55:18.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:18 vm02 bash[55303]: audit 2026-03-10T05:55:16.956491+0000 mgr.y (mgr.24992) 167 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:55:18.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:18 vm02 bash[55303]: cluster 2026-03-10T05:55:16.998137+0000 mon.a (mon.0) 379 : cluster [WRN] Health check update: Degraded data redundancy: 55/723 objects degraded (7.607%), 12 pgs degraded (PG_DEGRADED)
2026-03-10T05:55:18.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:18 vm05 bash[43541]: cluster 2026-03-10T05:55:16.850386+0000 mgr.y (mgr.24992) 166 : cluster [DBG] pgmap v91: 161 pgs: 19 active+undersized, 7 peering, 12 active+undersized+degraded, 123 active+clean; 457 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 55/723 objects degraded (7.607%)
2026-03-10T05:55:18.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:18 vm05 bash[43541]: audit 2026-03-10T05:55:16.956491+0000 mgr.y (mgr.24992) 167 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:55:18.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:18 vm05 bash[43541]: cluster 2026-03-10T05:55:16.998137+0000 mon.a (mon.0) 379 : cluster [WRN] Health check update: Degraded data redundancy: 55/723 objects degraded (7.607%), 12 pgs degraded (PG_DEGRADED)
2026-03-10T05:55:19.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:19 vm02 bash[56371]: cluster 2026-03-10T05:55:19.007231+0000 mon.a (mon.0) 380 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 3 pgs inactive, 4 pgs peering)
2026-03-10T05:55:19.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:19 vm02 bash[56371]: cluster 2026-03-10T05:55:19.007244+0000 mon.a (mon.0) 381 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 55/723 objects degraded (7.607%), 12 pgs degraded)
2026-03-10T05:55:19.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:19 vm02 bash[56371]: cluster 2026-03-10T05:55:19.007249+0000 mon.a (mon.0) 382 : cluster [INF] Cluster is now healthy
2026-03-10T05:55:19.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:19 vm02 bash[55303]: cluster 2026-03-10T05:55:19.007231+0000 mon.a (mon.0) 380 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 3 pgs inactive, 4 pgs peering)
2026-03-10T05:55:19.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:19 vm02 bash[55303]: cluster 2026-03-10T05:55:19.007244+0000 mon.a (mon.0) 381 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 55/723 objects degraded (7.607%), 12 pgs degraded)
2026-03-10T05:55:19.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:19 vm02 bash[55303]: cluster 2026-03-10T05:55:19.007249+0000 mon.a (mon.0) 382 : cluster [INF] Cluster is now healthy
2026-03-10T05:55:19.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:19 vm05 bash[43541]: cluster 2026-03-10T05:55:19.007231+0000 mon.a (mon.0) 380 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 3 pgs inactive, 4 pgs peering)
2026-03-10T05:55:19.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:19 vm05 bash[43541]: cluster 2026-03-10T05:55:19.007244+0000 mon.a (mon.0) 381 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 55/723 objects degraded (7.607%), 12 pgs degraded)
2026-03-10T05:55:19.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:19 vm05 bash[43541]: cluster 2026-03-10T05:55:19.007249+0000 mon.a (mon.0) 382 : cluster [INF] Cluster is now healthy
2026-03-10T05:55:20.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:20 vm02 bash[56371]: cluster 2026-03-10T05:55:18.850736+0000 mgr.y (mgr.24992) 168 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 737 B/s rd, 0 op/s
2026-03-10T05:55:20.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:20 vm02 bash[55303]: cluster 2026-03-10T05:55:18.850736+0000 mgr.y (mgr.24992) 168 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 737 B/s rd, 0 op/s
2026-03-10T05:55:20.383 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:20 vm05 bash[43541]: cluster 2026-03-10T05:55:18.850736+0000 mgr.y (mgr.24992) 168 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 737 B/s rd, 0 op/s
2026-03-10T05:55:22.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:22 vm02 bash[56371]: cluster 2026-03-10T05:55:20.851021+0000 mgr.y (mgr.24992) 169 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T05:55:22.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:22 vm02 bash[55303]: cluster 2026-03-10T05:55:20.851021+0000 mgr.y (mgr.24992) 169 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T05:55:22.380 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:22 vm05 bash[43541]: cluster 2026-03-10T05:55:20.851021+0000 mgr.y (mgr.24992) 169 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T05:55:23.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:23 vm02 bash[56371]: audit 2026-03-10T05:55:22.084921+0000 mon.a (mon.0) 383 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:23.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:23 vm02 bash[56371]: audit 2026-03-10T05:55:22.091819+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:23.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:23 vm02 bash[56371]: audit 2026-03-10T05:55:22.093777+0000 mon.a (mon.0) 385 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:55:23.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:23 vm02 bash[56371]: audit 2026-03-10T05:55:22.094478+0000 mon.a (mon.0) 386 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:55:23.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:23 vm02 bash[56371]: audit 2026-03-10T05:55:22.097967+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:23.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:23 vm02 bash[56371]: audit 2026-03-10T05:55:22.139967+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:55:23.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:23 vm02 bash[56371]: audit 2026-03-10T05:55:22.140938+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:55:23.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:23 vm02 bash[56371]: audit 2026-03-10T05:55:22.141632+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:55:23.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:23 vm02 bash[56371]: audit 2026-03-10T05:55:22.142157+0000 mon.a (mon.0) 391 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:55:23.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:23 vm02 bash[56371]: audit 2026-03-10T05:55:22.142785+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch
2026-03-10T05:55:23.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:23 vm02 bash[56371]: audit 2026-03-10T05:55:22.142920+0000 mgr.y (mgr.24992) 170 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch
2026-03-10T05:55:23.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:23 vm02 bash[56371]: cephadm 2026-03-10T05:55:22.143487+0000 mgr.y (mgr.24992) 171 : cephadm [INF] Upgrade: osd.5 is safe to restart
2026-03-10T05:55:23.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:23 vm02 bash[56371]: cephadm 2026-03-10T05:55:22.519806+0000 mgr.y (mgr.24992) 172 : cephadm [INF] Upgrade: Updating osd.5
2026-03-10T05:55:23.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:23 vm02 bash[56371]: audit 2026-03-10T05:55:22.526685+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:23.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:23 vm02 bash[56371]: audit 2026-03-10T05:55:22.528203+0000 mon.a (mon.0) 394 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-10T05:55:23.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:23 vm02 bash[56371]: audit 2026-03-10T05:55:22.528554+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:55:23.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:23 vm02 bash[56371]: cephadm 2026-03-10T05:55:22.529701+0000 mgr.y (mgr.24992) 173 : cephadm [INF] Deploying daemon osd.5 on vm05
2026-03-10T05:55:23.336 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:55:22 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:55:22] "GET /metrics HTTP/1.1" 200 38061 "" "Prometheus/2.51.0"
2026-03-10T05:55:23.336 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:23 vm02 bash[55303]: audit 2026-03-10T05:55:22.084921+0000 mon.a (mon.0) 383 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:23.336 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:23 vm02 bash[55303]: audit 2026-03-10T05:55:22.091819+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:23.336 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:23 vm02 bash[55303]: audit 2026-03-10T05:55:22.093777+0000 mon.a (mon.0) 385 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:55:23.336 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:23 vm02 bash[55303]: audit 2026-03-10T05:55:22.094478+0000 mon.a (mon.0) 386 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:55:23.336 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:23 vm02 bash[55303]: audit 2026-03-10T05:55:22.097967+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:23.336 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:23 vm02 bash[55303]: audit 2026-03-10T05:55:22.139967+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:55:23.336 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:23 vm02 bash[55303]: audit 2026-03-10T05:55:22.140938+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:55:23.336 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:23 vm02 bash[55303]: audit 2026-03-10T05:55:22.141632+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:55:23.336 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:23 vm02 bash[55303]: audit 2026-03-10T05:55:22.142157+0000 mon.a (mon.0) 391 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:55:23.336 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:23 vm02 bash[55303]: audit 2026-03-10T05:55:22.142785+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch
2026-03-10T05:55:23.336 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:23 vm02 bash[55303]: audit 2026-03-10T05:55:22.142920+0000 mgr.y (mgr.24992) 170 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch
cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-10T05:55:23.336 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:23 vm02 bash[55303]: cephadm 2026-03-10T05:55:22.143487+0000 mgr.y (mgr.24992) 171 : cephadm [INF] Upgrade: osd.5 is safe to restart 2026-03-10T05:55:23.336 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:23 vm02 bash[55303]: cephadm 2026-03-10T05:55:22.143487+0000 mgr.y (mgr.24992) 171 : cephadm [INF] Upgrade: osd.5 is safe to restart 2026-03-10T05:55:23.336 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:23 vm02 bash[55303]: cephadm 2026-03-10T05:55:22.519806+0000 mgr.y (mgr.24992) 172 : cephadm [INF] Upgrade: Updating osd.5 2026-03-10T05:55:23.336 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:23 vm02 bash[55303]: cephadm 2026-03-10T05:55:22.519806+0000 mgr.y (mgr.24992) 172 : cephadm [INF] Upgrade: Updating osd.5 2026-03-10T05:55:23.336 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:23 vm02 bash[55303]: audit 2026-03-10T05:55:22.526685+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:23.336 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:23 vm02 bash[55303]: audit 2026-03-10T05:55:22.526685+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:23.336 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:23 vm02 bash[55303]: audit 2026-03-10T05:55:22.528203+0000 mon.a (mon.0) 394 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T05:55:23.336 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:23 vm02 bash[55303]: audit 2026-03-10T05:55:22.528203+0000 mon.a (mon.0) 394 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T05:55:23.336 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:23 vm02 bash[55303]: audit 2026-03-10T05:55:22.528554+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:55:23.336 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:23 vm02 bash[55303]: audit 2026-03-10T05:55:22.528554+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:55:23.336 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:23 vm02 bash[55303]: cephadm 2026-03-10T05:55:22.529701+0000 mgr.y (mgr.24992) 173 : cephadm [INF] Deploying daemon osd.5 on vm05 2026-03-10T05:55:23.336 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:23 vm02 bash[55303]: cephadm 2026-03-10T05:55:22.529701+0000 mgr.y (mgr.24992) 173 : cephadm [INF] Deploying daemon osd.5 on vm05 2026-03-10T05:55:23.385 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:55:23 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T05:55:23.385 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:23 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:23.385 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:23 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:23.385 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:23 vm05 bash[43541]: audit 2026-03-10T05:55:22.084921+0000 mon.a (mon.0) 383 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:23.385 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:23 vm05 bash[43541]: audit 2026-03-10T05:55:22.091819+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:23.385 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:23 vm05 bash[43541]: audit 2026-03-10T05:55:22.093777+0000 mon.a (mon.0) 385 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:55:23.385 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:23 vm05 bash[43541]: audit 2026-03-10T05:55:22.094478+0000 mon.a (mon.0) 386 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:55:23.386 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:23 vm05 bash[43541]: audit 2026-03-10T05:55:22.097967+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:23.386 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:23 vm05 bash[43541]: audit 2026-03-10T05:55:22.139967+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:55:23.386 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:23 vm05 bash[43541]: audit 2026-03-10T05:55:22.140938+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:55:23.386 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:23 vm05 bash[43541]: audit 2026-03-10T05:55:22.141632+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:55:23.386 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:23 vm05 bash[43541]: audit 2026-03-10T05:55:22.142157+0000 mon.a (mon.0) 391 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:55:23.386 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:23 vm05 bash[43541]: audit 2026-03-10T05:55:22.142785+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch
2026-03-10T05:55:23.386 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:23 vm05 bash[43541]: audit 2026-03-10T05:55:22.142920+0000 mgr.y (mgr.24992) 170 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch
2026-03-10T05:55:23.386 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:23 vm05 bash[43541]: cephadm 2026-03-10T05:55:22.143487+0000 mgr.y (mgr.24992) 171 : cephadm [INF] Upgrade: osd.5 is safe to restart
2026-03-10T05:55:23.386 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:23 vm05 bash[43541]: cephadm 2026-03-10T05:55:22.519806+0000 mgr.y (mgr.24992) 172 : cephadm [INF] Upgrade: Updating osd.5
2026-03-10T05:55:23.386 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:23 vm05 bash[43541]: audit 2026-03-10T05:55:22.526685+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:23.386 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:23 vm05 bash[43541]: audit 2026-03-10T05:55:22.528203+0000 mon.a (mon.0) 394 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-10T05:55:23.386 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:23 vm05 bash[43541]: audit 2026-03-10T05:55:22.528554+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:55:23.386 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:23 vm05 bash[43541]: cephadm 2026-03-10T05:55:22.529701+0000 mgr.y (mgr.24992) 173 : cephadm [INF] Deploying daemon osd.5 on vm05
2026-03-10T05:55:23.386 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:23 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:23.386 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:23 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:23.386 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:23 vm05 systemd[1]: Stopping Ceph osd.5 for 107483ae-1c44-11f1-b530-c1172cd6122a...
2026-03-10T05:55:23.386 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:23 vm05 bash[23962]: debug 2026-03-10T05:55:23.291+0000 7f2c1716e700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T05:55:23.386 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:23 vm05 bash[23962]: debug 2026-03-10T05:55:23.291+0000 7f2c1716e700 -1 osd.5 116 *** Got signal Terminated ***
2026-03-10T05:55:23.386 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:23 vm05 bash[23962]: debug 2026-03-10T05:55:23.291+0000 7f2c1716e700 -1 osd.5 116 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T05:55:23.386 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:23 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:23.386 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:55:23 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:23.386 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:55:23 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:23.386 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:55:23 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
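Every daemon stop and start on vm05 is bracketed by the same systemd complaint: the unit template this cluster was bootstrapped with (ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service, written by the v17.2.0 image) still sets KillMode=none, which systemd deprecates. It is noise rather than a failure here, since cephadm manages the container lifecycle itself. Purely as a sketch of the remediation systemd suggests, a per-instance drop-in would look like the following; cephadm owns these units, so a later redeploy may overwrite it:

    # Hypothetical drop-in for one unit instance (name taken from the log).
    d=/etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.5.service.d
    mkdir -p "$d"
    printf '[Service]\nKillMode=mixed\n' > "$d/override.conf"
    systemctl daemon-reload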
2026-03-10T05:55:24.171 INFO:teuthology.orchestra.run.vm02.stdout:true
2026-03-10T05:55:24.438 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:24 vm05 bash[43541]: cluster 2026-03-10T05:55:22.851455+0000 mgr.y (mgr.24992) 174 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T05:55:24.438 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:24 vm05 bash[43541]: cluster 2026-03-10T05:55:23.292080+0000 mon.a (mon.0) 396 : cluster [INF] osd.5 marked itself down and dead
2026-03-10T05:55:24.438 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:24 vm05 bash[47255]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-osd-5
2026-03-10T05:55:24.438 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:55:24 vm05 bash[41269]: ts=2026-03-10T05:55:24.147Z caller=alerting.go:391 level=warn component="rule manager" alert="unsupported value type" msg="Expanding alert template failed" err="error executing template __alert_CephOSDDown: template: __alert_CephOSDDown:1:358: executing \"__alert_CephOSDDown\" at : error calling query: found duplicate series for the match group {ceph_daemon=\"osd.4\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.4\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.4\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side" data="unsupported value type"
2026-03-10T05:55:24.438 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:55:24 vm05 bash[41269]: ts=2026-03-10T05:55:24.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.4\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.4\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.4\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:55:24.537 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T05:55:24.537 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.a vm02 *:9093,9094 running (3m) 26s ago 8m 14.9M - 0.25.0 c8568f914cd2 7a7c5c2cddb6
2026-03-10T05:55:24.537 INFO:teuthology.orchestra.run.vm02.stdout:grafana.a vm05 *:3000 running (3m) 9s ago 7m 37.2M - dad864ee21e9 95c6d977988a
2026-03-10T05:55:24.537 INFO:teuthology.orchestra.run.vm02.stdout:iscsi.foo.vm02.mxbwmh vm02 running (2m) 26s ago 7m 44.2M - 3.5 e1d6a67b021e 62aba5b41046
2026-03-10T05:55:24.537 INFO:teuthology.orchestra.run.vm02.stdout:mgr.x vm05 *:8443,9283,8765 running (2m) 9s ago 10m 464M - 19.2.3-678-ge911bdeb 654f31e6858e 7579626ada90
2026-03-10T05:55:24.537 INFO:teuthology.orchestra.run.vm02.stdout:mgr.y vm02 *:8443,9283,8765 running (3m) 26s ago 11m 529M - 19.2.3-678-ge911bdeb 654f31e6858e ef46d0f7b15e
2026-03-10T05:55:24.537 INFO:teuthology.orchestra.run.vm02.stdout:mon.a vm02 running (2m) 26s ago 11m 47.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e df3a0a290a95
2026-03-10T05:55:24.537 INFO:teuthology.orchestra.run.vm02.stdout:mon.b vm05 running (118s) 9s ago 10m 38.8M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1da04b90d16b
2026-03-10T05:55:24.537 INFO:teuthology.orchestra.run.vm02.stdout:mon.c vm02 running (2m) 26s ago 10m 44.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 7f2cdf1b7aa6
2026-03-10T05:55:24.537 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.a vm02 *:9100 running (3m) 26s ago 8m 7535k - 1.7.0 72c9c2088986 90288450bd1f
2026-03-10T05:55:24.537 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.b vm05 *:9100 running (3m) 9s ago 8m 7596k - 1.7.0 72c9c2088986 4e859143cb0e
2026-03-10T05:55:24.537 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (62s) 26s ago 10m 66.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 640360275f83
2026-03-10T05:55:24.537 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (30s) 26s ago 10m 21.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 4de5c460789a
2026-03-10T05:55:24.537 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (78s) 26s ago 10m 45.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 51dac2f581d9
2026-03-10T05:55:24.537 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm02 running (95s) 26s ago 9m 70.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 0eca961791f4
2026-03-10T05:55:24.538 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (14s) 9s ago 9m 33.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 2c1b499265f4
2026-03-10T05:55:24.538 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (9m) 9s ago 9m 55.8M 4096M 17.2.0 e1d6a67b021e cba5583c238e
2026-03-10T05:55:24.538 INFO:teuthology.orchestra.run.vm02.stdout:osd.6 vm05 running (9m) 9s ago 9m 53.1M 4096M 17.2.0 e1d6a67b021e 9d1b370357d7
2026-03-10T05:55:24.538 INFO:teuthology.orchestra.run.vm02.stdout:osd.7 vm05 running (8m) 9s ago 8m 55.2M 4096M 17.2.0 e1d6a67b021e 8a4837b788cf
2026-03-10T05:55:24.538 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.a vm05 *:9095 running (2m) 9s ago 8m 38.9M - 2.51.0 1d3b7f56885b 3328811f8f28
2026-03-10T05:55:24.538 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm02.pbogjd vm02 *:8000 running (7m) 26s ago 7m 87.2M - 17.2.0 e1d6a67b021e 2ab2ffd1abaa
2026-03-10T05:55:24.538 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm05.hvmsxl vm05 *:8000 running (7m) 9s ago 7m 86.4M - 17.2.0 e1d6a67b021e 85d1c77b7e9d
2026-03-10T05:55:24.538 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm02.pglcfm vm02 *:80 running (7m) 26s ago 7m 86.0M - 17.2.0 e1d6a67b021e ef152a460673
2026-03-10T05:55:24.538 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm05.hqqmap vm05 *:80 running (7m) 9s ago 7m 86.4M - 17.2.0 e1d6a67b021e 29c9ee794f34
2026-03-10T05:55:24.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:24 vm02 bash[56371]: cluster 2026-03-10T05:55:22.851455+0000 mgr.y (mgr.24992) 174 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T05:55:24.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:24 vm02 bash[56371]: cluster 2026-03-10T05:55:23.292080+0000 mon.a (mon.0) 396 : cluster [INF] osd.5 marked itself down and dead
2026-03-10T05:55:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:24 vm02 bash[55303]: cluster 2026-03-10T05:55:22.851455+0000 mgr.y (mgr.24992) 174 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T05:55:24.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:24 vm02 bash[55303]: cluster 2026-03-10T05:55:23.292080+0000 mon.a (mon.0) 396 : cluster [INF] osd.5 marked itself down and dead
2026-03-10T05:55:24.703 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:55:24 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:24.703 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:24 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:24.703 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:24 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:24.703 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:24 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:24.703 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:24 vm05 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.5.service: Deactivated successfully.
2026-03-10T05:55:24.703 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:24 vm05 systemd[1]: Stopped Ceph osd.5 for 107483ae-1c44-11f1-b530-c1172cd6122a.
2026-03-10T05:55:24.703 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:24 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:24.703 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:24 vm05 systemd[1]: Started Ceph osd.5 for 107483ae-1c44-11f1-b530-c1172cd6122a.
2026-03-10T05:55:24.703 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:24 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:24.703 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:55:24 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
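The Prometheus rule-evaluation failures logged at 05:55:24 are a side effect of the upgrade itself: ceph_osd_metadata for osd.4 exists twice, once with instance="ceph_cluster" and once with instance="192.168.123.105:9283", so the bundled alerts that join metrics with "* on (ceph_daemon) group_left (hostname) ceph_osd_metadata" find two right-hand series for one left-hand series and abort with "many-to-many matching not allowed". The duplicate labeling should settle once both mgr daemons expose the post-upgrade metrics configuration. A sketch of confirming the duplication from the shell, assuming promtool is available and Prometheus is reachable on vm05:9095 as listed in the ceph orch ps output above:

    # Any ceph_daemon with more than one metadata series will break
    # one-to-one joins in the bundled alert rules.
    promtool query instant http://vm05:9095 \
      'count by (ceph_daemon) (ceph_osd_metadata) > 1'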
2026-03-10T05:55:24.703 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:55:24 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:55:24.704 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:55:24 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:55:24.765 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T05:55:24.765 INFO:teuthology.orchestra.run.vm02.stdout: "mon": { 2026-03-10T05:55:24.765 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3 2026-03-10T05:55:24.765 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T05:55:24.765 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": { 2026-03-10T05:55:24.765 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T05:55:24.765 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T05:55:24.765 INFO:teuthology.orchestra.run.vm02.stdout: "osd": { 2026-03-10T05:55:24.765 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2, 2026-03-10T05:55:24.765 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 5 2026-03-10T05:55:24.765 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T05:55:24.765 INFO:teuthology.orchestra.run.vm02.stdout: "rgw": { 2026-03-10T05:55:24.765 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4 2026-03-10T05:55:24.765 INFO:teuthology.orchestra.run.vm02.stdout: }, 2026-03-10T05:55:24.765 INFO:teuthology.orchestra.run.vm02.stdout: "overall": { 2026-03-10T05:55:24.765 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 6, 2026-03-10T05:55:24.765 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 10 2026-03-10T05:55:24.765 INFO:teuthology.orchestra.run.vm02.stdout: } 2026-03-10T05:55:24.765 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T05:55:24.958 INFO:teuthology.orchestra.run.vm02.stdout:{ 2026-03-10T05:55:24.958 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", 2026-03-10T05:55:24.958 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true, 2026-03-10T05:55:24.958 INFO:teuthology.orchestra.run.vm02.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-10T05:55:24.958 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [ 2026-03-10T05:55:24.958 INFO:teuthology.orchestra.run.vm02.stdout: "mgr", 2026-03-10T05:55:24.958 
INFO:teuthology.orchestra.run.vm02.stdout: "mon" 2026-03-10T05:55:24.958 INFO:teuthology.orchestra.run.vm02.stdout: ], 2026-03-10T05:55:24.958 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "10/23 daemons upgraded", 2026-03-10T05:55:24.958 INFO:teuthology.orchestra.run.vm02.stdout: "message": "Currently upgrading osd daemons", 2026-03-10T05:55:24.958 INFO:teuthology.orchestra.run.vm02.stdout: "is_paused": false 2026-03-10T05:55:24.958 INFO:teuthology.orchestra.run.vm02.stdout:} 2026-03-10T05:55:24.999 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:24 vm05 bash[47467]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T05:55:25.000 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:24 vm05 bash[47467]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T05:55:25.218 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_WARN 1 osds down 2026-03-10T05:55:25.218 INFO:teuthology.orchestra.run.vm02.stdout:[WRN] OSD_DOWN: 1 osds down 2026-03-10T05:55:25.218 INFO:teuthology.orchestra.run.vm02.stdout: osd.5 (root=default,host=vm05) is down 2026-03-10T05:55:25.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:25 vm05 bash[43541]: cluster 2026-03-10T05:55:24.088293+0000 mon.a (mon.0) 397 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T05:55:25.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:25 vm05 bash[43541]: cluster 2026-03-10T05:55:24.088293+0000 mon.a (mon.0) 397 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T05:55:25.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:25 vm05 bash[43541]: cluster 2026-03-10T05:55:24.128450+0000 mon.a (mon.0) 398 : cluster [DBG] osdmap e117: 8 total, 7 up, 8 in 2026-03-10T05:55:25.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:25 vm05 bash[43541]: cluster 2026-03-10T05:55:24.128450+0000 mon.a (mon.0) 398 : cluster [DBG] osdmap e117: 8 total, 7 up, 8 in 2026-03-10T05:55:25.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:25 vm05 bash[43541]: audit 2026-03-10T05:55:24.157867+0000 mgr.y (mgr.24992) 175 : audit [DBG] from='client.34288 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:55:25.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:25 vm05 bash[43541]: audit 2026-03-10T05:55:24.157867+0000 mgr.y (mgr.24992) 175 : audit [DBG] from='client.34288 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:55:25.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:25 vm05 bash[43541]: audit 2026-03-10T05:55:24.349290+0000 mgr.y (mgr.24992) 176 : audit [DBG] from='client.44280 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:55:25.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:25 vm05 bash[43541]: audit 2026-03-10T05:55:24.349290+0000 mgr.y (mgr.24992) 176 : audit [DBG] from='client.44280 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:55:25.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:25 vm05 bash[43541]: audit 2026-03-10T05:55:24.532213+0000 mgr.y (mgr.24992) 177 : audit [DBG] from='client.34294 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:55:25.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:25 vm05 bash[43541]: audit 2026-03-10T05:55:24.532213+0000 mgr.y (mgr.24992) 177 : audit [DBG] 
from='client.34294 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:55:25.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:25 vm05 bash[43541]: audit 2026-03-10T05:55:24.692169+0000 mon.a (mon.0) 399 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:25.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:25 vm05 bash[43541]: audit 2026-03-10T05:55:24.692169+0000 mon.a (mon.0) 399 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:25.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:25 vm05 bash[43541]: audit 2026-03-10T05:55:24.699410+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:25.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:25 vm05 bash[43541]: audit 2026-03-10T05:55:24.699410+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:25.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:25 vm05 bash[43541]: audit 2026-03-10T05:55:24.763521+0000 mon.c (mon.1) 10 : audit [DBG] from='client.? 192.168.123.102:0/3037529283' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:25.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:25 vm05 bash[43541]: audit 2026-03-10T05:55:24.763521+0000 mon.c (mon.1) 10 : audit [DBG] from='client.? 192.168.123.102:0/3037529283' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:25.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:25 vm02 bash[56371]: cluster 2026-03-10T05:55:24.088293+0000 mon.a (mon.0) 397 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:25 vm02 bash[56371]: cluster 2026-03-10T05:55:24.088293+0000 mon.a (mon.0) 397 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:25 vm02 bash[56371]: cluster 2026-03-10T05:55:24.128450+0000 mon.a (mon.0) 398 : cluster [DBG] osdmap e117: 8 total, 7 up, 8 in 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:25 vm02 bash[56371]: cluster 2026-03-10T05:55:24.128450+0000 mon.a (mon.0) 398 : cluster [DBG] osdmap e117: 8 total, 7 up, 8 in 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:25 vm02 bash[56371]: audit 2026-03-10T05:55:24.157867+0000 mgr.y (mgr.24992) 175 : audit [DBG] from='client.34288 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:25 vm02 bash[56371]: audit 2026-03-10T05:55:24.157867+0000 mgr.y (mgr.24992) 175 : audit [DBG] from='client.34288 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:25 vm02 bash[56371]: audit 2026-03-10T05:55:24.349290+0000 mgr.y (mgr.24992) 176 : audit [DBG] from='client.44280 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:25 vm02 bash[56371]: audit 2026-03-10T05:55:24.349290+0000 mgr.y (mgr.24992) 176 : audit [DBG] from='client.44280 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", 
""]}]: dispatch 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:25 vm02 bash[56371]: audit 2026-03-10T05:55:24.532213+0000 mgr.y (mgr.24992) 177 : audit [DBG] from='client.34294 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:25 vm02 bash[56371]: audit 2026-03-10T05:55:24.532213+0000 mgr.y (mgr.24992) 177 : audit [DBG] from='client.34294 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:25 vm02 bash[56371]: audit 2026-03-10T05:55:24.692169+0000 mon.a (mon.0) 399 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:25 vm02 bash[56371]: audit 2026-03-10T05:55:24.692169+0000 mon.a (mon.0) 399 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:25 vm02 bash[56371]: audit 2026-03-10T05:55:24.699410+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:25 vm02 bash[56371]: audit 2026-03-10T05:55:24.699410+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:25 vm02 bash[56371]: audit 2026-03-10T05:55:24.763521+0000 mon.c (mon.1) 10 : audit [DBG] from='client.? 192.168.123.102:0/3037529283' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:25 vm02 bash[56371]: audit 2026-03-10T05:55:24.763521+0000 mon.c (mon.1) 10 : audit [DBG] from='client.? 
192.168.123.102:0/3037529283' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:25 vm02 bash[55303]: cluster 2026-03-10T05:55:24.088293+0000 mon.a (mon.0) 397 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:25 vm02 bash[55303]: cluster 2026-03-10T05:55:24.088293+0000 mon.a (mon.0) 397 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:25 vm02 bash[55303]: cluster 2026-03-10T05:55:24.128450+0000 mon.a (mon.0) 398 : cluster [DBG] osdmap e117: 8 total, 7 up, 8 in 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:25 vm02 bash[55303]: cluster 2026-03-10T05:55:24.128450+0000 mon.a (mon.0) 398 : cluster [DBG] osdmap e117: 8 total, 7 up, 8 in 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:25 vm02 bash[55303]: audit 2026-03-10T05:55:24.157867+0000 mgr.y (mgr.24992) 175 : audit [DBG] from='client.34288 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:25 vm02 bash[55303]: audit 2026-03-10T05:55:24.157867+0000 mgr.y (mgr.24992) 175 : audit [DBG] from='client.34288 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:25 vm02 bash[55303]: audit 2026-03-10T05:55:24.349290+0000 mgr.y (mgr.24992) 176 : audit [DBG] from='client.44280 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:25 vm02 bash[55303]: audit 2026-03-10T05:55:24.349290+0000 mgr.y (mgr.24992) 176 : audit [DBG] from='client.44280 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:25 vm02 bash[55303]: audit 2026-03-10T05:55:24.532213+0000 mgr.y (mgr.24992) 177 : audit [DBG] from='client.34294 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:25 vm02 bash[55303]: audit 2026-03-10T05:55:24.532213+0000 mgr.y (mgr.24992) 177 : audit [DBG] from='client.34294 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:25 vm02 bash[55303]: audit 2026-03-10T05:55:24.692169+0000 mon.a (mon.0) 399 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:25 vm02 bash[55303]: audit 2026-03-10T05:55:24.692169+0000 mon.a (mon.0) 399 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:25 vm02 bash[55303]: audit 2026-03-10T05:55:24.699410+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:25 vm02 bash[55303]: audit 2026-03-10T05:55:24.699410+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' 
entity='mgr.y' 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:25 vm02 bash[55303]: audit 2026-03-10T05:55:24.763521+0000 mon.c (mon.1) 10 : audit [DBG] from='client.? 192.168.123.102:0/3037529283' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:25.585 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:25 vm02 bash[55303]: audit 2026-03-10T05:55:24.763521+0000 mon.c (mon.1) 10 : audit [DBG] from='client.? 192.168.123.102:0/3037529283' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:25.999 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:25 vm05 bash[47467]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-10T05:55:25.999 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:25 vm05 bash[47467]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T05:55:25.999 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:25 vm05 bash[47467]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T05:55:25.999 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:25 vm05 bash[47467]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 2026-03-10T05:55:25.999 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:25 vm05 bash[47467]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-c4ba11ea-c441-47a6-8a32-196713681e4e/osd-block-2b35feb0-b492-4603-81e0-b864fb275f8c --path /var/lib/ceph/osd/ceph-5 --no-mon-config 2026-03-10T05:55:26.499 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:26 vm05 bash[47467]: Running command: /usr/bin/ln -snf /dev/ceph-c4ba11ea-c441-47a6-8a32-196713681e4e/osd-block-2b35feb0-b492-4603-81e0-b864fb275f8c /var/lib/ceph/osd/ceph-5/block 2026-03-10T05:55:26.499 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:26 vm05 bash[47467]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-5/block 2026-03-10T05:55:26.499 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:26 vm05 bash[47467]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 2026-03-10T05:55:26.499 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:26 vm05 bash[47467]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 2026-03-10T05:55:26.499 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:26 vm05 bash[47467]: --> ceph-volume lvm activate successful for osd ID: 5 2026-03-10T05:55:26.499 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:26 vm05 bash[47813]: debug 2026-03-10T05:55:26.215+0000 7f086a83c640 1 -- 192.168.123.105:0/3304187754 <== mon.2 v2:192.168.123.105:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x5558520cf680 con 0x5558520c6000 2026-03-10T05:55:26.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:26 vm05 bash[43541]: cluster 2026-03-10T05:55:24.851736+0000 mgr.y (mgr.24992) 178 : cluster [DBG] pgmap v96: 161 pgs: 21 stale+active+clean, 140 active+clean; 457 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T05:55:26.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:26 vm05 bash[43541]: cluster 2026-03-10T05:55:24.851736+0000 mgr.y (mgr.24992) 178 : cluster [DBG] pgmap v96: 161 pgs: 21 stale+active+clean, 140 active+clean; 457 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T05:55:26.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:26 vm05 bash[43541]: audit 2026-03-10T05:55:24.956710+0000 mgr.y (mgr.24992) 179 : audit [DBG] from='client.34306 -' 
entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:55:26.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:26 vm05 bash[43541]: audit 2026-03-10T05:55:24.956710+0000 mgr.y (mgr.24992) 179 : audit [DBG] from='client.34306 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T05:55:26.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:26 vm05 bash[43541]: cluster 2026-03-10T05:55:25.108986+0000 mon.a (mon.0) 401 : cluster [DBG] osdmap e118: 8 total, 7 up, 8 in 2026-03-10T05:55:26.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:26 vm05 bash[43541]: cluster 2026-03-10T05:55:25.108986+0000 mon.a (mon.0) 401 : cluster [DBG] osdmap e118: 8 total, 7 up, 8 in 2026-03-10T05:55:26.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:26 vm05 bash[43541]: audit 2026-03-10T05:55:25.216505+0000 mon.c (mon.1) 11 : audit [DBG] from='client.? 192.168.123.102:0/3340123391' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:55:26.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:26 vm05 bash[43541]: audit 2026-03-10T05:55:25.216505+0000 mon.c (mon.1) 11 : audit [DBG] from='client.? 192.168.123.102:0/3340123391' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T05:55:26.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:26 vm05 bash[43541]: audit 2026-03-10T05:55:25.884075+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:26.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:26 vm05 bash[43541]: audit 2026-03-10T05:55:25.884075+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:26.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:26 vm05 bash[43541]: audit 2026-03-10T05:55:25.884626+0000 mon.a (mon.0) 403 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:55:26.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:26 vm05 bash[43541]: audit 2026-03-10T05:55:25.884626+0000 mon.a (mon.0) 403 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:55:26.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:26 vm05 bash[43541]: audit 2026-03-10T05:55:25.916244+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:26.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:26 vm05 bash[43541]: audit 2026-03-10T05:55:25.916244+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:26.584 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:26 vm02 bash[56371]: cluster 2026-03-10T05:55:24.851736+0000 mgr.y (mgr.24992) 178 : cluster [DBG] pgmap v96: 161 pgs: 21 stale+active+clean, 140 active+clean; 457 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T05:55:26.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:26 vm02 bash[56371]: cluster 2026-03-10T05:55:24.851736+0000 mgr.y (mgr.24992) 178 : cluster [DBG] pgmap v96: 161 pgs: 21 stale+active+clean, 140 active+clean; 457 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T05:55:26.585 
2026-03-10T05:55:27.162 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:26 vm05 bash[47813]: debug 2026-03-10T05:55:26.887+0000 7f086d0a6740 -1 Falling back to public interface
2026-03-10T05:55:27.162 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:55:26 vm05 bash[41269]: ts=2026-03-10T05:55:26.949Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:55:27.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:27 vm02 bash[56371]: cluster 2026-03-10T05:55:27.107856+0000 mon.a (mon.0) 405 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)
2026-03-10T05:55:27.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:27 vm02 bash[56371]: cluster 2026-03-10T05:55:27.107879+0000 mon.a (mon.0) 406 : cluster [WRN] Health check failed: Degraded data redundancy: 34/723 objects degraded (4.703%), 8 pgs degraded (PG_DEGRADED)
2026-03-10T05:55:28.165 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:27 vm05 bash[47813]: debug 2026-03-10T05:55:27.855+0000 7f086d0a6740 -1 osd.5 0 read_superblock omap replica is missing.
2026-03-10T05:55:28.165 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:27 vm05 bash[47813]: debug 2026-03-10T05:55:27.867+0000 7f086d0a6740 -1 osd.5 116 log_to_monitors true
2026-03-10T05:55:28.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:28 vm05 bash[43541]: cluster 2026-03-10T05:55:26.852110+0000 mgr.y (mgr.24992) 180 : cluster [DBG] pgmap v98: 161 pgs: 21 active+undersized, 6 peering, 8 stale+active+clean, 8 active+undersized+degraded, 118 active+clean; 457 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 34/723 objects degraded (4.703%)
2026-03-10T05:55:28.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:28 vm05 bash[43541]: audit 2026-03-10T05:55:26.966229+0000 mgr.y (mgr.24992) 181 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:55:28.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:28 vm05 bash[43541]: audit 2026-03-10T05:55:27.869782+0000 mon.a (mon.0) 407 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-10T05:55:28.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:28 vm05 bash[43541]: audit 2026-03-10T05:55:27.873703+0000 mon.b (mon.2) 9 : audit [INF] from='osd.5 [v2:192.168.123.105:6808/878773099,v1:192.168.123.105:6809/878773099]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-10T05:55:29.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:29 vm05 bash[43541]: audit 2026-03-10T05:55:28.172680+0000 mon.a (mon.0) 408 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-10T05:55:29.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:29 vm05 bash[43541]: cluster 2026-03-10T05:55:28.177674+0000 mon.a (mon.0) 409 : cluster [DBG] osdmap e119: 8 total, 7 up, 8 in
2026-03-10T05:55:29.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:29 vm05 bash[43541]: audit 2026-03-10T05:55:28.185763+0000 mon.a (mon.0) 410 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T05:55:29.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:29 vm05 bash[43541]: audit 2026-03-10T05:55:28.189690+0000 mon.b (mon.2) 10 : audit [INF] from='osd.5 [v2:192.168.123.105:6808/878773099,v1:192.168.123.105:6809/878773099]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T05:55:29.500 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:29 vm05 bash[47813]: debug 2026-03-10T05:55:29.099+0000 7f0864650640 -1 osd.5 116 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-10T05:55:30.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:30 vm05 bash[43541]: cluster 2026-03-10T05:55:28.852426+0000 mgr.y (mgr.24992) 182 : cluster [DBG] pgmap v100: 161 pgs: 35 active+undersized, 6 peering, 18 active+undersized+degraded, 102 active+clean; 457 KiB data, 233 MiB used, 160 GiB / 160 GiB avail; 68/723 objects degraded (9.405%)
2026-03-10T05:55:30.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:30 vm05 bash[43541]: cluster 2026-03-10T05:55:29.174024+0000 mon.a (mon.0) 411 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T05:55:30.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:30 vm05 bash[43541]: cluster 2026-03-10T05:55:29.215180+0000 mon.a (mon.0) 412 : cluster [INF] osd.5 [v2:192.168.123.105:6808/878773099,v1:192.168.123.105:6809/878773099] boot
2026-03-10T05:55:30.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:30 vm05 bash[43541]: cluster 2026-03-10T05:55:29.215237+0000 mon.a (mon.0) 413 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in
2026-03-10T05:55:30.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:30 vm05 bash[43541]: audit 2026-03-10T05:55:29.215467+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T05:55:31.509 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:31 vm05 bash[43541]: cluster 2026-03-10T05:55:29.104237+0000 osd.5 (osd.5) 1 : cluster [WRN] OSD bench result of 27291.241403 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.5. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
2026-03-10T05:55:31.509 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:31 vm05 bash[43541]: cluster 2026-03-10T05:55:30.196876+0000 mon.a (mon.0) 415 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in
2026-03-10T05:55:32.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:32 vm05 bash[43541]: cluster 2026-03-10T05:55:30.852724+0000 mgr.y (mgr.24992) 183 : cluster [DBG] pgmap v103: 161 pgs: 35 active+undersized, 6 peering, 18 active+undersized+degraded, 102 active+clean; 457 KiB data, 233 MiB used, 160 GiB / 160 GiB avail; 68/723 objects degraded (9.405%)
2026-03-10T05:55:32.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:32 vm05 bash[43541]: audit 2026-03-10T05:55:31.344792+0000 mon.a (mon.0) 416 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:32.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:32 vm05 bash[43541]: audit 2026-03-10T05:55:31.351298+0000 mon.a (mon.0) 417 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:32.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:32 vm05 bash[43541]: audit 2026-03-10T05:55:31.907648+0000 mon.a (mon.0) 418 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:32.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:32 vm05 bash[43541]: audit 2026-03-10T05:55:31.913907+0000 mon.a (mon.0) 419 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:33.334 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:55:32 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:55:32] "GET /metrics HTTP/1.1" 200 38082 "" "Prometheus/2.51.0"
2026-03-10T05:55:33.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:33 vm05 bash[43541]: cluster 2026-03-10T05:55:33.345697+0000 mon.a (mon.0) 420 : cluster [WRN] Health check update: Degraded data redundancy: 36/723 objects degraded (4.979%), 9 pgs degraded (PG_DEGRADED)
2026-03-10T05:55:34.403 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:55:34 vm05 bash[41269]: ts=2026-03-10T05:55:34.147Z caller=alerting.go:391 level=warn component="rule manager" alert="unsupported value type" msg="Expanding alert template failed" err="error executing template __alert_CephOSDDown: template: __alert_CephOSDDown:1:358: executing \"__alert_CephOSDDown\" at : error calling query: found duplicate series for the match group {ceph_daemon=\"osd.5\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.5\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.5\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side" data="unsupported value type"
2026-03-10T05:55:34.403 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:55:34 vm05 bash[41269]: ts=2026-03-10T05:55:34.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.5\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.5\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.5\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:55:34.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:34 vm05 bash[43541]: cluster 2026-03-10T05:55:32.853165+0000 mgr.y (mgr.24992) 184 : cluster [DBG] pgmap v104: 161 pgs: 16 active+undersized, 6 peering, 9 active+undersized+degraded, 130 active+clean; 457 KiB data, 234 MiB used, 160 GiB / 160 GiB avail; 36/723 objects degraded (4.979%)
2026-03-10T05:55:35.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:35 vm05 bash[43541]: cluster 2026-03-10T05:55:35.401682+0000 mon.a (mon.0) 421 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering)
2026-03-10T05:55:35.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:35 vm05 bash[43541]: cluster 2026-03-10T05:55:35.401705+0000 mon.a (mon.0) 422 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 36/723 objects degraded (4.979%), 9 pgs degraded)
2026-03-10T05:55:35.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:35 vm05 bash[43541]: cluster 2026-03-10T05:55:35.401711+0000 mon.a (mon.0) 423 : cluster [INF] Cluster is now healthy
2026-03-10T05:55:36.726 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:36 vm05 bash[43541]: cluster 2026-03-10T05:55:34.853495+0000 mgr.y (mgr.24992) 185 : cluster [DBG] pgmap v105: 161 pgs: 5 peering, 156 active+clean; 457 KiB data, 234 MiB used, 160 GiB / 160 GiB avail; 613 B/s rd, 0 op/s
2026-03-10T05:55:34.853495+0000 mgr.y (mgr.24992) 185 : cluster [DBG] pgmap v105: 161 pgs: 5 peering, 156 active+clean; 457 KiB data, 234 MiB used, 160 GiB / 160 GiB avail; 613 B/s rd, 0 op/s 2026-03-10T05:55:36.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:36 vm02 bash[55303]: cluster 2026-03-10T05:55:34.853495+0000 mgr.y (mgr.24992) 185 : cluster [DBG] pgmap v105: 161 pgs: 5 peering, 156 active+clean; 457 KiB data, 234 MiB used, 160 GiB / 160 GiB avail; 613 B/s rd, 0 op/s 2026-03-10T05:55:37.000 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:55:36 vm05 bash[41269]: ts=2026-03-10T05:55:36.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T05:55:38.724 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:38 vm05 bash[43541]: cluster 2026-03-10T05:55:36.853911+0000 mgr.y (mgr.24992) 186 : cluster [DBG] pgmap v106: 161 pgs: 161 active+clean; 457 KiB data, 234 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T05:55:38.724 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:38 vm05 bash[43541]: cluster 2026-03-10T05:55:36.853911+0000 mgr.y (mgr.24992) 186 : cluster [DBG] pgmap v106: 161 pgs: 161 active+clean; 457 KiB data, 234 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T05:55:38.724 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:38 vm05 bash[43541]: audit 2026-03-10T05:55:36.976759+0000 mgr.y (mgr.24992) 187 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:55:38.724 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:38 vm05 bash[43541]: audit 2026-03-10T05:55:36.976759+0000 mgr.y (mgr.24992) 187 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:55:38.724 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:38 vm05 bash[43541]: audit 2026-03-10T05:55:38.416912+0000 mon.a (mon.0) 424 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:38.724 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:38 vm05 bash[43541]: audit 2026-03-10T05:55:38.416912+0000 mon.a (mon.0) 424 : audit 
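Note: the CephNodeDiskspaceWarning failure above is a PromQL many-to-many matching error, not a disk problem. Per the err text, two node_uname_info series exist for instance="vm05" (one carrying a cluster label, one without), so the rule's "on (instance) group_left (nodename)" join has no unique right-hand series. A quick way to confirm the duplicate series is the Prometheus series API; the port below is an assumption (9095 is the cephadm default), adjust to the deployed endpoint:

  curl -s 'http://vm05:9095/api/v1/series' --data-urlencode 'match[]=node_uname_info{instance="vm05"}' | jq .

As a sketch of a fix (not what the shipped rule does), aggregating the right-hand side to one series per instance, e.g. "max by (instance, nodename) (node_uname_info)", would make the match unique again.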
2026-03-10T05:55:38.724 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:38 vm05 bash[43541]: cluster 2026-03-10T05:55:36.853911+0000 mgr.y (mgr.24992) 186 : cluster [DBG] pgmap v106: 161 pgs: 161 active+clean; 457 KiB data, 234 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T05:55:38.724 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:38 vm05 bash[43541]: audit 2026-03-10T05:55:36.976759+0000 mgr.y (mgr.24992) 187 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:55:38.724 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:38 vm05 bash[43541]: audit 2026-03-10T05:55:38.416912+0000 mon.a (mon.0) 424 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:38.724 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:38 vm05 bash[43541]: audit 2026-03-10T05:55:38.422612+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:38.724 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:38 vm05 bash[43541]: audit 2026-03-10T05:55:38.423750+0000 mon.a (mon.0) 426 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:55:38.724 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:38 vm05 bash[43541]: audit 2026-03-10T05:55:38.424164+0000 mon.a (mon.0) 427 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:55:38.724 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:38 vm05 bash[43541]: audit 2026-03-10T05:55:38.428282+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:38.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:38 vm02 bash[56371]: cluster 2026-03-10T05:55:36.853911+0000 mgr.y (mgr.24992) 186 : cluster [DBG] pgmap v106: 161 pgs: 161 active+clean; 457 KiB data, 234 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T05:55:38.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:38 vm02 bash[56371]: audit 2026-03-10T05:55:36.976759+0000 mgr.y (mgr.24992) 187 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:55:38.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:38 vm02 bash[56371]: audit 2026-03-10T05:55:38.416912+0000 mon.a (mon.0) 424 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:38.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:38 vm02 bash[56371]: audit 2026-03-10T05:55:38.422612+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:38.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:38 vm02 bash[56371]: audit 2026-03-10T05:55:38.423750+0000 mon.a (mon.0) 426 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:55:38.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:38 vm02 bash[56371]: audit 2026-03-10T05:55:38.424164+0000 mon.a (mon.0) 427 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:55:38.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:38 vm02 bash[56371]: audit 2026-03-10T05:55:38.428282+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:38.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:38 vm02 bash[55303]: cluster 2026-03-10T05:55:36.853911+0000 mgr.y (mgr.24992) 186 : cluster [DBG] pgmap v106: 161 pgs: 161 active+clean; 457 KiB data, 234 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T05:55:38.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:38 vm02 bash[55303]: audit 2026-03-10T05:55:36.976759+0000 mgr.y (mgr.24992) 187 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:55:38.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:38 vm02 bash[55303]: audit 2026-03-10T05:55:38.416912+0000 mon.a (mon.0) 424 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:38.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:38 vm02 bash[55303]: audit 2026-03-10T05:55:38.422612+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:38.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:38 vm02 bash[55303]: audit 2026-03-10T05:55:38.423750+0000 mon.a (mon.0) 426 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:55:38.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:38 vm02 bash[55303]: audit 2026-03-10T05:55:38.424164+0000 mon.a (mon.0) 427 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:55:38.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:38 vm02 bash[55303]: audit 2026-03-10T05:55:38.428282+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:39.749 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:55:39 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:39.749 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:39 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:39.750 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:39 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:39.750 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:55:39 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:39.750 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:55:39 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:39.750 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:39 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:39.750 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:55:39 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
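Note: the KillMode=none warnings repeated for every daemon come from cephadm's shared unit template ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service, which all containerized daemons on the host instantiate; systemd prints the warning once per unit it starts or stops, so the repetition is expected and harmless for the upgrade itself. To see the offending line (line 23, per the warning) on a node, something like:

  grep -n 'KillMode' /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service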
2026-03-10T05:55:39.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:39 vm05 bash[43541]: audit 2026-03-10T05:55:38.472933+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:55:39.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:39 vm05 bash[43541]: audit 2026-03-10T05:55:38.473868+0000 mon.a (mon.0) 430 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:55:39.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:39 vm05 bash[43541]: audit 2026-03-10T05:55:38.474526+0000 mon.a (mon.0) 431 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:55:39.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:39 vm05 bash[43541]: audit 2026-03-10T05:55:38.475038+0000 mon.a (mon.0) 432 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:55:39.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:39 vm05 bash[43541]: audit 2026-03-10T05:55:38.475691+0000 mon.a (mon.0) 433 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch
2026-03-10T05:55:39.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:39 vm05 bash[43541]: audit 2026-03-10T05:55:38.475832+0000 mgr.y (mgr.24992) 188 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch
2026-03-10T05:55:39.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:39 vm05 bash[43541]: cephadm 2026-03-10T05:55:38.476299+0000 mgr.y (mgr.24992) 189 : cephadm [INF] Upgrade: osd.6 is safe to restart
2026-03-10T05:55:39.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:39 vm05 bash[43541]: audit 2026-03-10T05:55:38.864124+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:39.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:39 vm05 bash[43541]: audit 2026-03-10T05:55:38.865721+0000 mon.a (mon.0) 435 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T05:55:39.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:39 vm05 bash[43541]: audit 2026-03-10T05:55:38.866148+0000 mon.a (mon.0) 436 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:55:39.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:39 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:39.750 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:39 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:39.750 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:39 vm05 systemd[1]: Stopping Ceph osd.6 for 107483ae-1c44-11f1-b530-c1172cd6122a...
2026-03-10T05:55:39.750 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:39 vm05 bash[27098]: debug 2026-03-10T05:55:39.623+0000 7fed9cb30700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.6 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T05:55:39.750 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:39 vm05 bash[27098]: debug 2026-03-10T05:55:39.623+0000 7fed9cb30700 -1 osd.6 121 *** Got signal Terminated ***
2026-03-10T05:55:39.750 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:39 vm05 bash[27098]: debug 2026-03-10T05:55:39.623+0000 7fed9cb30700 -1 osd.6 121 *** Immediate shutdown (osd_fast_shutdown=true) ***
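Note: the immediate exit on SIGTERM is deliberate. With osd_fast_shutdown=true (as the log line itself notes), the OSD skips a graceful drain and instead tells the monitors it is going away; the "osd.6 marked itself down and dead" message a moment later is the mon acknowledging that clean shutdown. To inspect the setting on a cluster, a simple check would be:

  ceph config get osd osd_fast_shutdown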
2026-03-10T05:55:39.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:39 vm02 bash[56371]: audit 2026-03-10T05:55:38.472933+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:55:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:39 vm02 bash[56371]: audit 2026-03-10T05:55:38.473868+0000 mon.a (mon.0) 430 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:55:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:39 vm02 bash[56371]: audit 2026-03-10T05:55:38.474526+0000 mon.a (mon.0) 431 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:55:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:39 vm02 bash[56371]: audit 2026-03-10T05:55:38.475038+0000 mon.a (mon.0) 432 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:55:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:39 vm02 bash[56371]: audit 2026-03-10T05:55:38.475691+0000 mon.a (mon.0) 433 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch
2026-03-10T05:55:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:39 vm02 bash[56371]: audit 2026-03-10T05:55:38.475832+0000 mgr.y (mgr.24992) 188 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch
2026-03-10T05:55:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:39 vm02 bash[56371]: cephadm 2026-03-10T05:55:38.476299+0000 mgr.y (mgr.24992) 189 : cephadm [INF] Upgrade: osd.6 is safe to restart
2026-03-10T05:55:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:39 vm02 bash[56371]: audit 2026-03-10T05:55:38.864124+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:39 vm02 bash[56371]: audit 2026-03-10T05:55:38.865721+0000 mon.a (mon.0) 435 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T05:55:39.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:39 vm02 bash[56371]: audit 2026-03-10T05:55:38.866148+0000 mon.a (mon.0) 436 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:55:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:39 vm02 bash[55303]: audit 2026-03-10T05:55:38.472933+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:55:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:39 vm02 bash[55303]: audit 2026-03-10T05:55:38.473868+0000 mon.a (mon.0) 430 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:55:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:39 vm02 bash[55303]: audit 2026-03-10T05:55:38.474526+0000 mon.a (mon.0) 431 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:55:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:39 vm02 bash[55303]: audit 2026-03-10T05:55:38.475038+0000 mon.a (mon.0) 432 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:55:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:39 vm02 bash[55303]: audit 2026-03-10T05:55:38.475691+0000 mon.a (mon.0) 433 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch
2026-03-10T05:55:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:39 vm02 bash[55303]: audit 2026-03-10T05:55:38.475832+0000 mgr.y (mgr.24992) 188 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch
2026-03-10T05:55:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:39 vm02 bash[55303]: cephadm 2026-03-10T05:55:38.476299+0000 mgr.y (mgr.24992) 189 : cephadm [INF] Upgrade: osd.6 is safe to restart
2026-03-10T05:55:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:39 vm02 bash[55303]: audit 2026-03-10T05:55:38.864124+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:39 vm02 bash[55303]: audit 2026-03-10T05:55:38.865721+0000 mon.a (mon.0) 435 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T05:55:39.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:39 vm02 bash[55303]: audit 2026-03-10T05:55:38.866148+0000 mon.a (mon.0) 436 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:55:40.749 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:40 vm05 bash[49283]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-osd-6
2026-03-10T05:55:40.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:40 vm05 bash[43541]: cluster 2026-03-10T05:55:38.854292+0000 mgr.y (mgr.24992) 190 : cluster [DBG] pgmap v107: 161 pgs: 161 active+clean; 457 KiB data, 238 MiB used, 160 GiB / 160 GiB avail; 953 B/s rd, 0 op/s
2026-03-10T05:55:40.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:40 vm05 bash[43541]: cephadm 2026-03-10T05:55:38.860212+0000 mgr.y (mgr.24992) 191 : cephadm [INF] Upgrade: Updating osd.6
2026-03-10T05:55:40.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:40 vm05 bash[43541]: cephadm 2026-03-10T05:55:38.867478+0000 mgr.y (mgr.24992) 192 : cephadm [INF] Deploying daemon osd.6 on vm05
2026-03-10T05:55:40.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:40 vm05 bash[43541]: cluster 2026-03-10T05:55:39.624733+0000 mon.a (mon.0) 437 : cluster [INF] osd.6 marked itself down and dead
2026-03-10T05:55:40.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:40 vm02 bash[55303]: cluster 2026-03-10T05:55:38.854292+0000 mgr.y (mgr.24992) 190 : cluster [DBG] pgmap v107: 161 pgs: 161 active+clean; 457 KiB data, 238 MiB used, 160 GiB / 160 GiB avail; 953 B/s rd, 0 op/s
2026-03-10T05:55:40.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:40 vm02 bash[55303]: cephadm 2026-03-10T05:55:38.860212+0000 mgr.y (mgr.24992) 191 : cephadm [INF] Upgrade: Updating osd.6
2026-03-10T05:55:40.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:40 vm02 bash[55303]: cephadm 2026-03-10T05:55:38.867478+0000 mgr.y (mgr.24992) 192 : cephadm [INF] Deploying daemon osd.6 on vm05
2026-03-10T05:55:40.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:40 vm02 bash[55303]: cluster 2026-03-10T05:55:39.624733+0000 mon.a (mon.0) 437 : cluster [INF] osd.6 marked itself down and dead
2026-03-10T05:55:40.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:40 vm02 bash[56371]: cluster 2026-03-10T05:55:38.854292+0000 mgr.y (mgr.24992) 190 : cluster [DBG] pgmap v107: 161 pgs: 161 active+clean; 457 KiB data, 238 MiB used, 160 GiB / 160 GiB avail; 953 B/s rd, 0 op/s
2026-03-10T05:55:40.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:40 vm02 bash[56371]: cephadm 2026-03-10T05:55:38.860212+0000 mgr.y (mgr.24992) 191 : cephadm [INF] Upgrade: Updating osd.6
2026-03-10T05:55:40.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:40 vm02 bash[56371]: cephadm 2026-03-10T05:55:38.867478+0000 mgr.y (mgr.24992) 192 : cephadm [INF] Deploying daemon osd.6 on vm05
2026-03-10T05:55:40.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:40 vm02 bash[56371]: cluster 2026-03-10T05:55:39.624733+0000 mon.a (mon.0) 437 : cluster [INF] osd.6 marked itself down and dead
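Note: "Upgrade: Updating osd.6" followed by "Deploying daemon osd.6 on vm05" is cephadm rewriting the daemon's unit and config files for the new image and restarting it in place; the OSD is redeployed, not recreated, and its data volume is untouched. Outside an orchestrated upgrade, the same per-daemon code path can be exercised by hand (the image argument is optional and shown only as a placeholder, not a value from this run):

  ceph orch daemon redeploy osd.6 <image>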
2026-03-10T05:55:41.074 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:40 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:41.075 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:55:40 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:41.075 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:40 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:41.075 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:40 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:41.075 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:40 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:41.075 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:55:40 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:41.075 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:55:40 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:41.075 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:55:40 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:41.075 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:40 vm05 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.6.service: Deactivated successfully.
2026-03-10T05:55:41.075 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:40 vm05 systemd[1]: Stopped Ceph osd.6 for 107483ae-1c44-11f1-b530-c1172cd6122a.
2026-03-10T05:55:41.075 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:40 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:41.075 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:41 vm05 systemd[1]: Started Ceph osd.6 for 107483ae-1c44-11f1-b530-c1172cd6122a.
2026-03-10T05:55:41.479 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:41 vm05 bash[49487]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T05:55:41.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:41 vm05 bash[43541]: cluster 2026-03-10T05:55:40.472863+0000 mon.a (mon.0) 438 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T05:55:41.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:41 vm05 bash[43541]: cluster 2026-03-10T05:55:40.508074+0000 mon.a (mon.0) 439 : cluster [DBG] osdmap e122: 8 total, 7 up, 8 in
2026-03-10T05:55:41.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:41 vm05 bash[43541]: audit 2026-03-10T05:55:40.881255+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:55:41.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:41 vm05 bash[43541]: audit 2026-03-10T05:55:41.053707+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:41.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:41 vm05 bash[43541]: audit 2026-03-10T05:55:41.061616+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:41.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:41 vm02 bash[56371]: cluster 2026-03-10T05:55:40.472863+0000 mon.a (mon.0) 438 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T05:55:41.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:41 vm02 bash[56371]: cluster 2026-03-10T05:55:40.508074+0000 mon.a (mon.0) 439 : cluster [DBG] osdmap e122: 8 total, 7 up, 8 in
2026-03-10T05:55:41.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:41 vm02 bash[56371]: audit 2026-03-10T05:55:40.881255+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:55:41.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:41 vm02 bash[56371]: audit 2026-03-10T05:55:41.053707+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:41.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:41 vm02 bash[56371]: audit 2026-03-10T05:55:41.061616+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:41.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:41 vm02 bash[55303]: cluster 2026-03-10T05:55:40.472863+0000 mon.a (mon.0) 438 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T05:55:41.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:41 vm02 bash[55303]: cluster 2026-03-10T05:55:40.508074+0000 mon.a (mon.0) 439 : cluster [DBG] osdmap e122: 8 total, 7 up, 8 in
2026-03-10T05:55:41.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:41 vm02 bash[55303]: audit 2026-03-10T05:55:40.881255+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:55:41.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:41 vm02 bash[55303]: audit 2026-03-10T05:55:41.053707+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:41.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:41 vm02 bash[55303]: audit 2026-03-10T05:55:41.061616+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:42.395 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:42 vm05 bash[49487]: --> Failed to activate via raw: did not find any matching OSD to activate
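Note: the "Failed to activate via raw" line is expected rather than fatal. The freshly started unit runs ceph-volume activation, which tries raw devices first, finds none, and falls back to LVM; the LVM activation succeeds just below. What ceph-volume knows about the host's OSD devices can be listed from the host, e.g.:

  cephadm ceph-volume -- lvm list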
2026-03-10T05:55:42.395 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:42 vm05 bash[49487]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T05:55:42.395 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:42 vm05 bash[49487]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-6
2026-03-10T05:55:42.395 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:42 vm05 bash[49487]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-fc1e1ab8-5d4a-4559-9655-71bf1a4da7a3/osd-block-b2fa96ba-d56a-43b9-ab42-f9fc8abe2daf --path /var/lib/ceph/osd/ceph-6 --no-mon-config
2026-03-10T05:55:42.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:42 vm05 bash[43541]: cluster 2026-03-10T05:55:40.854548+0000 mgr.y (mgr.24992) 193 : cluster [DBG] pgmap v109: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 238 MiB used, 160 GiB / 160 GiB avail; 921 B/s rd, 0 op/s
2026-03-10T05:55:42.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:42 vm05 bash[43541]: cluster 2026-03-10T05:55:41.510795+0000 mon.a (mon.0) 443 : cluster [DBG] osdmap e123: 8 total, 7 up, 8 in
2026-03-10T05:55:42.749 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:42 vm05 bash[49487]: Running command: /usr/bin/ln -snf /dev/ceph-fc1e1ab8-5d4a-4559-9655-71bf1a4da7a3/osd-block-b2fa96ba-d56a-43b9-ab42-f9fc8abe2daf /var/lib/ceph/osd/ceph-6/block
2026-03-10T05:55:42.749 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:42 vm05 bash[49487]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-6/block
2026-03-10T05:55:42.749 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:42 vm05 bash[49487]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
2026-03-10T05:55:42.749 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:42 vm05 bash[49487]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-6
2026-03-10T05:55:42.749 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:42 vm05 bash[49487]: --> ceph-volume lvm activate successful for osd ID: 6
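Note: the activate sequence (prime-osd-dir, re-linking the block device, chowns) rebuilds the OSD's working directory from LVM tags so the new container can boot the existing data in place; nothing is copied, the data volume is only re-linked. The resulting symlink can be sanity-checked from inside the daemon's container (the paths in the log are container-side):

  ls -l /var/lib/ceph/osd/ceph-6/block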
2026-03-10T05:55:42.749 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:42 vm05 bash[49827]: debug 2026-03-10T05:55:42.543+0000 7fac1e7f9640 1 -- 192.168.123.105:0/182273747 <== mon.1 v2:192.168.123.102:3301/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x55f9378a1680 con 0x55f93789a000
2026-03-10T05:55:42.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:42 vm02 bash[56371]: cluster 2026-03-10T05:55:40.854548+0000 mgr.y (mgr.24992) 193 : cluster [DBG] pgmap v109: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 238 MiB used, 160 GiB / 160 GiB avail; 921 B/s rd, 0 op/s
2026-03-10T05:55:42.834 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:42 vm02 bash[56371]: cluster 2026-03-10T05:55:41.510795+0000 mon.a (mon.0) 443 : cluster [DBG] osdmap e123: 8 total, 7 up, 8 in
2026-03-10T05:55:42.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:42 vm02 bash[55303]: cluster 2026-03-10T05:55:40.854548+0000 mgr.y (mgr.24992) 193 : cluster [DBG] pgmap v109: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 238 MiB used, 160 GiB / 160 GiB avail; 921 B/s rd, 0 op/s
2026-03-10T05:55:42.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:42 vm02 bash[55303]: cluster 2026-03-10T05:55:41.510795+0000 mon.a (mon.0) 443 : cluster [DBG] osdmap e123: 8 total, 7 up, 8 in
2026-03-10T05:55:43.334 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:55:42 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:55:42] "GET /metrics HTTP/1.1" 200 38096 "" "Prometheus/2.51.0"
2026-03-10T05:55:43.547 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:43 vm05 bash[49827]: debug 2026-03-10T05:55:43.255+0000 7fac21063740 -1 Falling back to public interface
2026-03-10T05:55:43.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:43 vm02 bash[56371]: cluster 2026-03-10T05:55:43.492734+0000 mon.a (mon.0) 444 : cluster [WRN] Health check failed: Degraded data redundancy: 56/723 objects degraded (7.746%), 10 pgs degraded (PG_DEGRADED)
2026-03-10T05:55:43.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:43 vm02 bash[55303]: cluster 2026-03-10T05:55:43.492734+0000 mon.a (mon.0) 444 : cluster [WRN] Health check failed: Degraded data redundancy: 56/723 objects degraded (7.746%), 10 pgs degraded (PG_DEGRADED)
2026-03-10T05:55:43.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:43 vm05 bash[43541]: cluster 2026-03-10T05:55:43.492734+0000 mon.a (mon.0) 444 : cluster [WRN] Health check failed: Degraded data redundancy: 56/723 objects degraded (7.746%), 10 pgs degraded (PG_DEGRADED)
2026-03-10T05:55:44.499 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:44 vm05 bash[49827]: debug 2026-03-10T05:55:44.211+0000 7fac21063740 -1 osd.6 0 read_superblock omap replica is missing.
2026-03-10T05:55:44.499 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:44 vm05 bash[49827]: debug 2026-03-10T05:55:44.223+0000 7fac21063740 -1 osd.6 121 log_to_monitors true
2026-03-10T05:55:44.499 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:55:44 vm05 bash[41269]: ts=2026-03-10T05:55:44.147Z caller=alerting.go:391 level=warn component="rule manager" alert="unsupported value type" msg="Expanding alert template failed" err="error executing template __alert_CephOSDDown: template: __alert_CephOSDDown:1:358: executing \"__alert_CephOSDDown\" at : error calling query: found duplicate series for the match group {ceph_daemon=\"osd.6\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.6\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.6\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side" data="unsupported value type"
2026-03-10T05:55:44.499 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:55:44 vm05 bash[41269]: ts=2026-03-10T05:55:44.148Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.6\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.6\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.6\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:55:44.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:44 vm02 bash[55303]: cluster 2026-03-10T05:55:42.854939+0000 mgr.y (mgr.24992) 194 : cluster [DBG] pgmap v111: 161 pgs: 26 active+undersized, 5 peering, 4 stale+active+clean, 10 active+undersized+degraded, 116 active+clean; 457 KiB data, 238 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s; 56/723 objects degraded (7.746%)
2026-03-10T05:55:44.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:44 vm02 bash[55303]: audit 2026-03-10T05:55:44.226881+0000 mon.a (mon.0) 445 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-10T05:55:44.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:44 vm02 bash[55303]: audit 2026-03-10T05:55:44.231052+0000 mon.b (mon.2) 11 : audit [INF] from='osd.6 [v2:192.168.123.105:6816/3905027923,v1:192.168.123.105:6817/3905027923]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
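Both Prometheus warnings above are the same PromQL failure: ceph_osd_metadata momentarily has two series for osd.6 (one per instance label, "ceph_cluster" vs "192.168.123.105:9283"), so the on (ceph_daemon) group_left (hostname) join no longer has a unique right-hand side. One common workaround, sketched here against the Prometheus HTTP API (assuming the prometheus.a endpoint on vm05:9095 shown later in the ceph orch ps output), is to collapse the metadata to one series per (ceph_daemon, hostname) before joining:

    # aggregate away the conflicting instance/cluster labels so the
    # group_left right-hand side is unique per ceph_daemon
    curl -sG http://vm05:9095/api/v1/query \
      --data-urlencode 'query=(rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) max by (ceph_daemon, hostname) (ceph_osd_metadata)) * 60 > 1'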
2026-03-10T05:55:45.749 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:45 vm05 bash[49827]: debug 2026-03-10T05:55:45.483+0000 7fac1860d640 -1 osd.6 121 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-10T05:55:45.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:45 vm05 bash[43541]: audit 2026-03-10T05:55:44.551235+0000 mon.a (mon.0) 446 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished
2026-03-10T05:55:45.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:45 vm05 bash[43541]: cluster 2026-03-10T05:55:44.555147+0000 mon.a (mon.0) 447 : cluster [DBG] osdmap e124: 8 total, 7 up, 8 in
2026-03-10T05:55:45.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:45 vm05 bash[43541]: audit 2026-03-10T05:55:44.563106+0000 mon.a (mon.0) 448 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T05:55:45.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:45 vm05 bash[43541]: audit 2026-03-10T05:55:44.565979+0000 mon.b (mon.2) 12 : audit [INF] from='osd.6 [v2:192.168.123.105:6816/3905027923,v1:192.168.123.105:6817/3905027923]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T05:55:46.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:46 vm02 bash[56371]: cluster 2026-03-10T05:55:44.855240+0000 mgr.y (mgr.24992) 195 : cluster [DBG] pgmap v113: 161 pgs: 2 unknown, 32 active+undersized, 5 peering, 13 active+undersized+degraded, 109 active+clean; 457 KiB data, 256 MiB used, 160 GiB / 160 GiB avail; 68/723 objects degraded (9.405%)
2026-03-10T05:55:46.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:46 vm02 bash[56371]: cluster 2026-03-10T05:55:45.560476+0000 mon.a (mon.0) 449 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T05:55:46.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:46 vm02 bash[56371]: cluster 2026-03-10T05:55:45.590337+0000 mon.a (mon.0) 450 : cluster [INF] osd.6 [v2:192.168.123.105:6816/3905027923,v1:192.168.123.105:6817/3905027923] boot
2026-03-10T05:55:46.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:46 vm02 bash[56371]: cluster 2026-03-10T05:55:45.590480+0000 mon.a (mon.0) 451 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in
2026-03-10T05:55:46.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:46 vm02 bash[56371]: audit 2026-03-10T05:55:45.590920+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T05:55:46.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:46 vm02 bash[56371]: audit 2026-03-10T05:55:45.926867+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
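osd.6 has now booted, OSD_DOWN is cleared, and osdmap e125 shows all 8 OSDs up and in. A minimal polling sketch in the style of the suite's own loops, waiting for the OSD_DOWN check to clear (the commands are standard ceph CLI; timing is illustrative):

    # block until no OSD is reported down, dumping the tree while waiting
    while ceph health detail | grep -q OSD_DOWN ; do ceph osd tree ; sleep 5 ; done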
2026-03-10T05:55:47.248 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:55:46 vm05 bash[41269]: ts=2026-03-10T05:55:46.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:55:47.904 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:47 vm05 bash[43541]: cluster 2026-03-10T05:55:45.469985+0000 osd.6 (osd.6) 1 : cluster [WRN] OSD bench result of 33903.059175 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.6. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
2026-03-10T05:55:47.904 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:47 vm05 bash[43541]: cluster 2026-03-10T05:55:46.583704+0000 mon.a (mon.0) 454 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in
2026-03-10T05:55:47.904 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:47 vm05 bash[43541]: audit 2026-03-10T05:55:47.487366+0000 mon.a (mon.0) 455 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:47.904 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:47 vm05 bash[43541]: audit 2026-03-10T05:55:47.493726+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:48.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:48 vm05 bash[43541]: cluster 2026-03-10T05:55:46.855599+0000 mgr.y (mgr.24992) 196 : cluster [DBG] pgmap v116: 161 pgs: 26 active+undersized, 5 peering, 12 active+undersized+degraded, 118 active+clean; 457 KiB data, 256 MiB used, 160 GiB / 160 GiB avail; 58/723 objects degraded (8.022%)
2026-03-10T05:55:48.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:48 vm05 bash[43541]: audit 2026-03-10T05:55:46.982549+0000 mgr.y (mgr.24992) 197 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:55:48.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:48 vm05 bash[43541]: audit 2026-03-10T05:55:48.017302+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:48.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:48 vm05 bash[43541]: audit 2026-03-10T05:55:48.024304+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
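The OSD bench warning above means the measured 33903 IOPS fell outside the 50-500 IOPS plausibility window for osd.6, so the mclock scheduler kept its previous 315 IOPS capacity. Following the message's own recommendation, a hedged sketch of re-measuring and pinning the capacity (the 450 value is purely illustrative; derive the real number from a fio or bench run):

    # re-measure the device with the OSD's built-in bencher
    ceph tell osd.6 bench
    # pin the mclock IOPS capacity for this hdd-class OSD and verify it
    ceph config set osd.6 osd_mclock_max_capacity_iops_hdd 450
    ceph config get osd.6 osd_mclock_max_capacity_iops_hdd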
2026-03-10T05:55:49.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:49 vm05 bash[43541]: cluster 2026-03-10T05:55:49.590898+0000 mon.a (mon.0) 459 : cluster [WRN] Health check update: Degraded data redundancy: 14/723 objects degraded (1.936%), 4 pgs degraded (PG_DEGRADED)
2026-03-10T05:55:50.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:50 vm05 bash[43541]: cluster 2026-03-10T05:55:48.855965+0000 mgr.y (mgr.24992) 198 : cluster [DBG] pgmap v117: 161 pgs: 9 active+undersized, 5 peering, 4 active+undersized+degraded, 143 active+clean; 457 KiB data, 256 MiB used, 160 GiB / 160 GiB avail; 14/723 objects degraded (1.936%)
2026-03-10T05:55:51.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:51 vm05 bash[43541]: cluster 2026-03-10T05:55:51.646287+0000 mon.a (mon.0) 460 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 14/723 objects degraded (1.936%), 4 pgs degraded)
2026-03-10T05:55:51.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:51 vm05 bash[43541]: cluster 2026-03-10T05:55:51.646324+0000 mon.a (mon.0) 461 : cluster [INF] Cluster is now healthy
2026-03-10T05:55:52.970 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:52 vm05 bash[43541]: cluster 2026-03-10T05:55:50.856270+0000 mgr.y (mgr.24992) 199 : cluster [DBG] pgmap v118: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 256 MiB used, 160 GiB / 160 GiB avail
2026-03-10T05:55:53.085 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:55:52 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:55:52] "GET /metrics HTTP/1.1" 200 38096 "" "Prometheus/2.51.0"
2026-03-10T05:55:54.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:53 vm02 bash[56371]: cluster 2026-03-10T05:55:52.856801+0000 mgr.y (mgr.24992) 200 : cluster [DBG] pgmap v119: 161 pgs: 161 active+clean; 457 KiB data, 256 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
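Recovery completes here: PG_DEGRADED clears, the monitor logs "Cluster is now healthy", and pgmap v119 shows all 161 PGs active+clean. A one-line wait-for-healthy sketch in the same spirit as the suite's polling loops:

    # block until the cluster reports HEALTH_OK, dumping detail while we wait
    while ! ceph health | grep -q HEALTH_OK ; do ceph health detail ; sleep 10 ; done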
2026-03-10T05:55:54.499 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:55:54 vm05 bash[41269]: ts=2026-03-10T05:55:54.147Z caller=alerting.go:391 level=warn component="rule manager" alert="unsupported value type" msg="Expanding alert template failed" err="error executing template __alert_CephOSDDown: template: __alert_CephOSDDown:1:358: executing \"__alert_CephOSDDown\" at : error calling query: found duplicate series for the match group {ceph_daemon=\"osd.6\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.6\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.6\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side" data="unsupported value type"
2026-03-10T05:55:54.499 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:55:54 vm05 bash[41269]: ts=2026-03-10T05:55:54.148Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.6\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.6\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.6\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:55:55.424 INFO:teuthology.orchestra.run.vm02.stdout:true
2026-03-10T05:55:55.814 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T05:55:55.815 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.a vm02 *:9093,9094 running (3m) 57s ago 8m 14.9M - 0.25.0 c8568f914cd2 7a7c5c2cddb6
2026-03-10T05:55:55.815 INFO:teuthology.orchestra.run.vm02.stdout:grafana.a vm05 *:3000 running (3m) 8s ago 8m 40.6M - dad864ee21e9 95c6d977988a
2026-03-10T05:55:55.815 INFO:teuthology.orchestra.run.vm02.stdout:iscsi.foo.vm02.mxbwmh vm02 running (3m) 57s ago 8m 44.2M - 3.5 e1d6a67b021e 62aba5b41046
2026-03-10T05:55:55.815 INFO:teuthology.orchestra.run.vm02.stdout:mgr.x vm05 *:8443,9283,8765 running (3m) 8s ago 11m 465M - 19.2.3-678-ge911bdeb 654f31e6858e 7579626ada90
2026-03-10T05:55:55.815 INFO:teuthology.orchestra.run.vm02.stdout:mgr.y vm02 *:8443,9283,8765 running (3m) 57s ago 12m 529M - 19.2.3-678-ge911bdeb 654f31e6858e ef46d0f7b15e
2026-03-10T05:55:55.815 INFO:teuthology.orchestra.run.vm02.stdout:mon.a vm02 running (2m) 57s ago 12m 47.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e df3a0a290a95
2026-03-10T05:55:55.815 INFO:teuthology.orchestra.run.vm02.stdout:mon.b vm05 running (2m) 8s ago 11m 41.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1da04b90d16b
2026-03-10T05:55:55.815 INFO:teuthology.orchestra.run.vm02.stdout:mon.c vm02 running (3m) 57s ago 11m 44.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 7f2cdf1b7aa6
2026-03-10T05:55:55.815 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.a vm02 *:9100 running (3m) 57s ago 9m 7535k - 1.7.0 72c9c2088986 90288450bd1f
2026-03-10T05:55:55.815 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.b vm05 *:9100 running (3m) 8s ago 8m 7635k - 1.7.0 72c9c2088986 4e859143cb0e
2026-03-10T05:55:55.815 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (93s) 57s ago 11m 66.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 640360275f83
2026-03-10T05:55:55.815 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (62s) 57s ago 10m 21.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 4de5c460789a
2026-03-10T05:55:55.815 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (110s) 57s ago 10m 45.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 51dac2f581d9
2026-03-10T05:55:55.815 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm02 running (2m) 57s ago 10m 70.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 0eca961791f4
2026-03-10T05:55:55.815 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (46s) 8s ago 10m 48.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 2c1b499265f4
2026-03-10T05:55:55.815 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (29s) 8s ago 9m 67.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7ec1a1246098
2026-03-10T05:55:55.815 INFO:teuthology.orchestra.run.vm02.stdout:osd.6 vm05 running (13s) 8s ago 9m 31.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e bd151ab03026
2026-03-10T05:55:55.815 INFO:teuthology.orchestra.run.vm02.stdout:osd.7 vm05 running (9m) 8s ago 9m 56.9M 4096M 17.2.0 e1d6a67b021e 8a4837b788cf
2026-03-10T05:55:55.815 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.a vm05 *:9095 running (3m) 8s ago 8m 39.1M - 2.51.0 1d3b7f56885b 3328811f8f28
2026-03-10T05:55:55.815 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm02.pbogjd vm02 *:8000 running (8m) 57s ago 8m 87.2M - 17.2.0 e1d6a67b021e 2ab2ffd1abaa
2026-03-10T05:55:55.815 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm05.hvmsxl vm05 *:8000 running (8m) 8s ago 8m 86.6M - 17.2.0 e1d6a67b021e 85d1c77b7e9d
2026-03-10T05:55:55.815 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm02.pglcfm vm02 *:80 running (8m) 57s ago 8m 86.0M - 17.2.0 e1d6a67b021e ef152a460673
2026-03-10T05:55:55.815 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm05.hqqmap vm05 *:80 running (8m) 8s ago 8m 86.6M - 17.2.0 e1d6a67b021e 29c9ee794f34
2026-03-10T05:55:55.852 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: audit 2026-03-10T05:55:54.603899+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:55.852 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: audit 2026-03-10T05:55:54.610607+0000 mon.a (mon.0) 463 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:55.852 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: audit 2026-03-10T05:55:54.612294+0000 mon.a (mon.0) 464 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:55:55.852 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: audit 2026-03-10T05:55:54.612879+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:55:55.852 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: audit 2026-03-10T05:55:54.612879+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]:
dispatch 2026-03-10T05:55:55.852 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: audit 2026-03-10T05:55:54.619153+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:55.852 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: audit 2026-03-10T05:55:54.619153+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:55.852 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: audit 2026-03-10T05:55:54.663470+0000 mon.a (mon.0) 467 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:55:55.852 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: audit 2026-03-10T05:55:54.663470+0000 mon.a (mon.0) 467 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:55:55.852 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: audit 2026-03-10T05:55:54.664627+0000 mon.a (mon.0) 468 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:55.852 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: audit 2026-03-10T05:55:54.664627+0000 mon.a (mon.0) 468 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:55.852 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: audit 2026-03-10T05:55:54.665448+0000 mon.a (mon.0) 469 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:55.852 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: audit 2026-03-10T05:55:54.665448+0000 mon.a (mon.0) 469 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:55.852 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: audit 2026-03-10T05:55:54.665977+0000 mon.a (mon.0) 470 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:55.852 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: audit 2026-03-10T05:55:54.665977+0000 mon.a (mon.0) 470 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:55:55.852 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: audit 2026-03-10T05:55:54.666707+0000 mon.a (mon.0) 471 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["7"], "max": 16}]: dispatch 2026-03-10T05:55:55.852 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: audit 2026-03-10T05:55:54.666707+0000 mon.a (mon.0) 471 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["7"], "max": 16}]: dispatch 2026-03-10T05:55:55.852 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: audit 2026-03-10T05:55:54.666863+0000 mgr.y (mgr.24992) 201 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["7"], "max": 16}]: dispatch 2026-03-10T05:55:55.852 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: audit 2026-03-10T05:55:54.666863+0000 mgr.y (mgr.24992) 201 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["7"], "max": 16}]: dispatch 2026-03-10T05:55:55.852 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: cephadm 2026-03-10T05:55:54.667359+0000 mgr.y (mgr.24992) 202 : cephadm [INF] Upgrade: osd.7 is safe to restart 2026-03-10T05:55:55.852 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: cephadm 2026-03-10T05:55:54.667359+0000 mgr.y (mgr.24992) 202 : cephadm [INF] Upgrade: osd.7 is safe to restart 2026-03-10T05:55:55.852 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: audit 2026-03-10T05:55:55.060988+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:55.852 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: audit 2026-03-10T05:55:55.060988+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:55:55.852 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: audit 2026-03-10T05:55:55.063716+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T05:55:55.852 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: audit 2026-03-10T05:55:55.063716+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T05:55:55.853 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: audit 2026-03-10T05:55:55.064104+0000 mon.a (mon.0) 474 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:55:55.853 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 bash[43541]: audit 2026-03-10T05:55:55.064104+0000 mon.a (mon.0) 474 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:55:55.853 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:55 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:55:55.853 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:55:55 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:55:55.853 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:55 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:55:55.853 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:55 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:55:55.853 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:55 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:55:55.853 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:55 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:55:55.853 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:55 vm05 systemd[1]: Stopping Ceph osd.7 for 107483ae-1c44-11f1-b530-c1172cd6122a... 2026-03-10T05:55:55.853 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:55:55 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:55:55.853 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:55:55 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:55:55.853 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:55:55 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
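[editor's note] The exchange recorded above (audit entries 471 and 201, followed by "cephadm [INF] Upgrade: osd.7 is safe to restart") is the orchestrator's pre-flight check before it stops the next OSD: the mgr asks the mons whether the OSD can go down without leaving PGs unavailable, and only then stops the daemon. The same probe can be run by hand; a minimal sketch, assuming an admin keyring on the host (exit status 0 means the stop is safe; the --max option, where supported, lets the mon propose a larger batch that is also safe):

    # Same safety check the upgrade loop dispatches before stopping osd.7.
    ceph osd ok-to-stop 7 --max 16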
2026-03-10T05:55:56.104 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:55:56.104 INFO:teuthology.orchestra.run.vm02.stdout:    "mon": {
2026-03-10T05:55:56.104 INFO:teuthology.orchestra.run.vm02.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-10T05:55:56.104 INFO:teuthology.orchestra.run.vm02.stdout:    },
2026-03-10T05:55:56.104 INFO:teuthology.orchestra.run.vm02.stdout:    "mgr": {
2026-03-10T05:55:56.105 INFO:teuthology.orchestra.run.vm02.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T05:55:56.105 INFO:teuthology.orchestra.run.vm02.stdout:    },
2026-03-10T05:55:56.105 INFO:teuthology.orchestra.run.vm02.stdout:    "osd": {
2026-03-10T05:55:56.105 INFO:teuthology.orchestra.run.vm02.stdout:        "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 1,
2026-03-10T05:55:56.105 INFO:teuthology.orchestra.run.vm02.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 7
2026-03-10T05:55:56.105 INFO:teuthology.orchestra.run.vm02.stdout:    },
2026-03-10T05:55:56.105 INFO:teuthology.orchestra.run.vm02.stdout:    "rgw": {
2026-03-10T05:55:56.105 INFO:teuthology.orchestra.run.vm02.stdout:        "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4
2026-03-10T05:55:56.105 INFO:teuthology.orchestra.run.vm02.stdout:    },
2026-03-10T05:55:56.105 INFO:teuthology.orchestra.run.vm02.stdout:    "overall": {
2026-03-10T05:55:56.105 INFO:teuthology.orchestra.run.vm02.stdout:        "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 5,
2026-03-10T05:55:56.105 INFO:teuthology.orchestra.run.vm02.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 12
2026-03-10T05:55:56.105 INFO:teuthology.orchestra.run.vm02.stdout:    }
2026-03-10T05:55:56.105 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:55:56.249 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:55 vm05 bash[30264]: debug 2026-03-10T05:55:55.903+0000 7f9ba0886700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.7 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T05:55:56.249 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:55 vm05 bash[30264]: debug 2026-03-10T05:55:55.903+0000 7f9ba0886700 -1 osd.7 126 *** Got signal Terminated ***
2026-03-10T05:55:56.249 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:55 vm05 bash[30264]: debug 2026-03-10T05:55:55.903+0000 7f9ba0886700 -1 osd.7 126 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T05:55:56.299 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:55:56.299 INFO:teuthology.orchestra.run.vm02.stdout:    "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
2026-03-10T05:55:56.299 INFO:teuthology.orchestra.run.vm02.stdout:    "in_progress": true,
2026-03-10T05:55:56.299 INFO:teuthology.orchestra.run.vm02.stdout:    "which": "Upgrading all daemon types on all hosts",
2026-03-10T05:55:56.299 INFO:teuthology.orchestra.run.vm02.stdout:    "services_complete": [
2026-03-10T05:55:56.299 INFO:teuthology.orchestra.run.vm02.stdout:        "mgr",
2026-03-10T05:55:56.299 INFO:teuthology.orchestra.run.vm02.stdout:        "mon"
2026-03-10T05:55:56.299 INFO:teuthology.orchestra.run.vm02.stdout:    ],
2026-03-10T05:55:56.299 INFO:teuthology.orchestra.run.vm02.stdout:    "progress": "12/23 daemons upgraded",
2026-03-10T05:55:56.299 INFO:teuthology.orchestra.run.vm02.stdout:    "message": "Currently upgrading osd daemons",
2026-03-10T05:55:56.299 INFO:teuthology.orchestra.run.vm02.stdout:    "is_paused": false
2026-03-10T05:55:56.299 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:55:56.527 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_OK
2026-03-10T05:55:56.942 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:56 vm05 bash[43541]: cluster 2026-03-10T05:55:54.857263+0000 mgr.y (mgr.24992) 203 : cluster [DBG] pgmap v120: 161 pgs: 161 active+clean; 457 KiB data, 256 MiB used, 160 GiB / 160 GiB avail; 662 B/s rd, 0 op/s
2026-03-10T05:55:56.942 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:56 vm05 bash[43541]: cephadm 2026-03-10T05:55:55.056268+0000 mgr.y (mgr.24992) 204 : cephadm [INF] Upgrade: Updating osd.7
2026-03-10T05:55:56.942 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:56 vm05 bash[43541]: cephadm 2026-03-10T05:55:55.065376+0000 mgr.y (mgr.24992) 205 : cephadm [INF] Deploying daemon osd.7 on vm05
2026-03-10T05:55:56.942 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:56 vm05 bash[43541]: audit 2026-03-10T05:55:55.413687+0000 mgr.y (mgr.24992) 206 : audit [DBG] from='client.34324 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:55:56.942 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:56 vm05 bash[43541]: audit 2026-03-10T05:55:55.614521+0000 mgr.y (mgr.24992) 207 : audit [DBG] from='client.44316 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:55:56.942 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:56 vm05 bash[43541]: cluster 2026-03-10T05:55:55.923202+0000 mon.a (mon.0) 475 : cluster [INF] osd.7 marked itself down and dead
2026-03-10T05:55:56.942 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:56 vm05 bash[43541]: audit 2026-03-10T05:55:55.926622+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:56.942 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:56 vm05 bash[43541]: audit 2026-03-10T05:55:55.928477+0000 mon.a (mon.0) 477 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:55:56.942 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:56 vm05 bash[43541]: audit 2026-03-10T05:55:56.103448+0000 mon.c (mon.1) 12 : audit [DBG] from='client.? 192.168.123.102:0/1133971765' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:55:56.942 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:56 vm05 bash[43541]: audit 2026-03-10T05:55:56.526422+0000 mon.a (mon.0) 478 : audit [DBG] from='client.? 192.168.123.102:0/2649949075' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T05:55:56.942 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:56 vm05 bash[51307]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-osd-7
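[editor's note] The orch upgrade status JSON above is exactly what a polling loop can key on: in_progress stays true until the orchestrator finishes or records an error in message. A minimal sketch of such a poll, assuming ceph and jq are available on the admin node (the cadence and final check are illustrative, not the harness's own script):

    # Poll the orchestrator until the upgrade stops reporting in_progress.
    # jq -e exits non-zero when .in_progress is false, which ends the loop.
    while ceph orch upgrade status | jq -e '.in_progress' >/dev/null; do
        ceph orch upgrade status | jq -r '.progress, .message'
        sleep 30
    done
    # Afterwards, .overall in `ceph versions` should collapse to one entry.
    ceph versions | jq '.overall'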
2026-03-10T05:55:57.204 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:57 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:57.204 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:55:57 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:57.205 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:55:57 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:57.205 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:55:57 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:57.205 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:55:57 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:57.205 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:56 vm05 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.7.service: Deactivated successfully.
2026-03-10T05:55:57.205 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:56 vm05 systemd[1]: Stopped Ceph osd.7 for 107483ae-1c44-11f1-b530-c1172cd6122a.
2026-03-10T05:55:57.205 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:57 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:57.205 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:57 vm05 systemd[1]: Started Ceph osd.7 for 107483ae-1c44-11f1-b530-c1172cd6122a.
2026-03-10T05:55:57.205 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:55:56 vm05 bash[41269]: ts=2026-03-10T05:55:56.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:55:57.205 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:55:57 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
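[editor's note] All three Prometheus rule failures in this log (CephOSDDown, CephOSDFlapping, CephNodeDiskspaceWarning) are the same class of problem: mid-upgrade, the right-hand side of a `* on (...) group_left (...)` join briefly contains two copies of a metadata series, one carrying the new cluster label and one without it, so the match group is no longer unique and PromQL rejects the one-to-many join. One way to make such a rule tolerant is to collapse duplicates before joining; an illustrative rewrite of the CephOSDFlapping expression, not the rule as shipped in ceph_alerts.yml:

    (
      rate(ceph_osd_up[5m])
        * on (ceph_daemon) group_left (hostname)
          max by (ceph_daemon, hostname) (ceph_osd_metadata)
    ) * 60 > 1

The max by (ceph_daemon, hostname) aggregation reduces both metadata series to one per match group, restoring the uniqueness the join requires; in this run the warnings are also expected to clear on their own once every exporter agrees on the label set.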
2026-03-10T05:55:57.205 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:55:57 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:57.205 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:55:57 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:55:57.499 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:57 vm05 bash[51514]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T05:55:57.499 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:57 vm05 bash[51514]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T05:55:57.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:57 vm05 bash[43541]: audit 2026-03-10T05:55:55.809344+0000 mgr.y (mgr.24992) 208 : audit [DBG] from='client.44322 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:55:57.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:57 vm05 bash[43541]: audit 2026-03-10T05:55:56.298141+0000 mgr.y (mgr.24992) 209 : audit [DBG] from='client.44331 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:55:57.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:57 vm05 bash[43541]: cluster 2026-03-10T05:55:56.622509+0000 mon.a (mon.0) 479 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T05:55:57.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:57 vm05 bash[43541]: cluster 2026-03-10T05:55:56.622553+0000 mon.a (mon.0) 480 : cluster [WRN] Health check failed: all OSDs are running squid or later but require_osd_release < squid (OSD_UPGRADE_FINISHED)
2026-03-10T05:55:57.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:57 vm05 bash[43541]: cluster 2026-03-10T05:55:56.631161+0000 mon.a (mon.0) 481 : cluster [DBG] osdmap e127: 8 total, 7 up, 8 in
2026-03-10T05:55:57.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:57 vm05 bash[43541]: audit 2026-03-10T05:55:57.178809+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:57.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:57 vm05 bash[43541]: audit 2026-03-10T05:55:57.184739+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:58.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:57 vm02 bash[56371]: audit 2026-03-10T05:55:55.809344+0000 mgr.y (mgr.24992) 208 : audit [DBG] from='client.44322 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:55:58.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:57 vm02 bash[56371]: audit 2026-03-10T05:55:56.298141+0000 mgr.y (mgr.24992) 209 : audit [DBG] from='client.44331 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:55:58.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:57 vm02 bash[56371]: cluster 2026-03-10T05:55:56.622509+0000 mon.a (mon.0) 479 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T05:55:58.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:57 vm02 bash[56371]: cluster 2026-03-10T05:55:56.622553+0000 mon.a (mon.0) 480 : cluster [WRN] Health check failed: all OSDs are running squid or later but require_osd_release < squid (OSD_UPGRADE_FINISHED)
2026-03-10T05:55:58.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:57 vm02 bash[56371]: cluster 2026-03-10T05:55:56.631161+0000 mon.a (mon.0) 481 : cluster [DBG] osdmap e127: 8 total, 7 up, 8 in
2026-03-10T05:55:58.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:57 vm02 bash[56371]: audit 2026-03-10T05:55:57.178809+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:58.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:57 vm02 bash[56371]: audit 2026-03-10T05:55:57.184739+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:58.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:57 vm02 bash[55303]: audit 2026-03-10T05:55:55.809344+0000 mgr.y (mgr.24992) 208 : audit [DBG] from='client.44322 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:55:58.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:57 vm02 bash[55303]: audit 2026-03-10T05:55:56.298141+0000 mgr.y (mgr.24992) 209 : audit [DBG] from='client.44331 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:55:58.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:57 vm02 bash[55303]: cluster 2026-03-10T05:55:56.622509+0000 mon.a (mon.0) 479 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T05:55:58.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:57 vm02 bash[55303]: cluster 2026-03-10T05:55:56.622553+0000 mon.a (mon.0) 480 : cluster [WRN] Health check failed: all OSDs are running squid or later but require_osd_release < squid (OSD_UPGRADE_FINISHED)
2026-03-10T05:55:58.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:57 vm02 bash[55303]: cluster 2026-03-10T05:55:56.631161+0000 mon.a (mon.0) 481 : cluster [DBG] osdmap e127: 8 total, 7 up, 8 in
2026-03-10T05:55:58.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:57 vm02 bash[55303]: audit 2026-03-10T05:55:57.178809+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:58.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:57 vm02 bash[55303]: audit 2026-03-10T05:55:57.184739+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:55:58.499 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:58 vm05 bash[51514]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-10T05:55:58.499 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:58 vm05 bash[51514]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T05:55:58.499 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:58 vm05 bash[51514]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T05:55:58.499 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:58 vm05 bash[51514]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-7
2026-03-10T05:55:58.499 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:58 vm05 bash[51514]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-ec148095-2666-4d05-8e07-c1a1f82afc83/osd-block-2d1f3ab7-28e5-424b-a95a-4d9947f78095 --path /var/lib/ceph/osd/ceph-7 --no-mon-config
2026-03-10T05:55:58.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:58 vm05 bash[43541]: cluster 2026-03-10T05:55:56.857616+0000 mgr.y (mgr.24992) 210 : cluster [DBG] pgmap v122: 161 pgs: 12 peering, 22 stale+active+clean, 127 active+clean; 457 KiB data, 256 MiB used, 160 GiB / 160 GiB avail; 819 B/s rd, 0 op/s
2026-03-10T05:55:58.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:58 vm05 bash[43541]: audit 2026-03-10T05:55:56.991132+0000 mgr.y (mgr.24992) 211 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:55:58.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:58 vm05 bash[43541]: cluster 2026-03-10T05:55:57.658150+0000 mon.a (mon.0) 484 : cluster [DBG] osdmap e128: 8 total, 7 up, 8 in
2026-03-10T05:55:58.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:58 vm05 bash[43541]: cluster 2026-03-10T05:55:57.658709+0000 mon.a (mon.0) 485 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)
2026-03-10T05:55:58.999 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:58 vm05 bash[51514]: Running command: /usr/bin/ln -snf /dev/ceph-ec148095-2666-4d05-8e07-c1a1f82afc83/osd-block-2d1f3ab7-28e5-424b-a95a-4d9947f78095 /var/lib/ceph/osd/ceph-7/block
2026-03-10T05:55:58.999 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:58 vm05 bash[51514]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-7/block
2026-03-10T05:55:58.999 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:58 vm05 bash[51514]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-3
2026-03-10T05:55:58.999 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:58 vm05 bash[51514]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-7
2026-03-10T05:55:58.999 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:58 vm05 bash[51514]: --> ceph-volume lvm activate successful for osd ID: 7
2026-03-10T05:55:59.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:58 vm02 bash[56371]: cluster 2026-03-10T05:55:56.857616+0000 mgr.y (mgr.24992) 210 : cluster [DBG] pgmap v122: 161 pgs: 12 peering, 22 stale+active+clean, 127 active+clean; 457 KiB data, 256 MiB used, 160 GiB / 160 GiB avail; 819 B/s rd, 0 op/s
2026-03-10T05:55:59.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:58 vm02 bash[56371]: audit 2026-03-10T05:55:56.991132+0000 mgr.y (mgr.24992) 211 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:55:59.084 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:58 vm02 bash[56371]: cluster 2026-03-10T05:55:57.658150+0000 mon.a (mon.0) 484 : cluster [DBG] osdmap e128: 8 total, 7 up, 8 in
2026-03-10T05:55:59.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:58 vm02 bash[56371]: cluster 2026-03-10T05:55:57.658709+0000 mon.a (mon.0) 485 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)
2026-03-10T05:55:59.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:58 vm02 bash[55303]: cluster 2026-03-10T05:55:56.857616+0000 mgr.y (mgr.24992) 210 : cluster [DBG] pgmap v122: 161 pgs: 12 peering, 22 stale+active+clean, 127 active+clean; 457 KiB data, 256 MiB used, 160 GiB / 160 GiB avail; 819 B/s rd, 0 op/s
2026-03-10T05:55:59.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:58 vm02 bash[55303]: audit 2026-03-10T05:55:56.991132+0000 mgr.y (mgr.24992) 211 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:55:59.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:58 vm02 bash[55303]: cluster 2026-03-10T05:55:57.658150+0000 mon.a (mon.0) 484 : cluster [DBG] osdmap e128: 8 total, 7 up, 8 in
2026-03-10T05:55:59.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:58 vm02 bash[55303]: cluster 2026-03-10T05:55:57.658709+0000 mon.a (mon.0) 485 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)
2026-03-10T05:55:59.727 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:55:59 vm05 bash[51877]: debug 2026-03-10T05:55:59.363+0000 7f51573fc740 -1 Falling back to public interface
2026-03-10T05:55:59.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:59 vm05 bash[43541]: cluster 2026-03-10T05:55:58.858054+0000 mgr.y (mgr.24992) 212 : cluster [DBG] pgmap v124: 161 pgs: 11 active+undersized, 23 peering, 9 stale+active+clean, 6 active+undersized+degraded, 112 active+clean; 457 KiB data, 260 MiB used, 160 GiB / 160 GiB avail; 18/723 objects degraded (2.490%)
2026-03-10T05:55:59.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:55:59 vm05 bash[43541]: cluster 2026-03-10T05:55:59.672496+0000 mon.a (mon.0) 486 : cluster [WRN] Health check failed: Degraded data redundancy: 18/723 objects degraded (2.490%), 6 pgs degraded (PG_DEGRADED)
2026-03-10T05:56:00.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:59 vm02 bash[56371]: cluster 2026-03-10T05:55:58.858054+0000 mgr.y (mgr.24992) 212 : cluster [DBG] pgmap v124: 161 pgs: 11 active+undersized, 23 peering, 9 stale+active+clean, 6 active+undersized+degraded, 112 active+clean; 457 KiB data, 260 MiB used, 160 GiB / 160 GiB avail; 18/723 objects degraded (2.490%)
2026-03-10T05:56:00.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:55:59 vm02 bash[56371]: cluster 2026-03-10T05:55:59.672496+0000 mon.a (mon.0) 486 : cluster [WRN] Health check failed: Degraded data redundancy: 18/723 objects degraded (2.490%), 6 pgs degraded (PG_DEGRADED)
2026-03-10T05:56:00.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:59 vm02 bash[55303]: cluster 2026-03-10T05:55:58.858054+0000 mgr.y (mgr.24992) 212 : cluster [DBG] pgmap v124: 161 pgs: 11 active+undersized, 23 peering, 9 stale+active+clean, 6 active+undersized+degraded, 112 active+clean; 457 KiB data, 260 MiB used, 160 GiB / 160 GiB avail; 18/723 objects degraded (2.490%)
2026-03-10T05:56:00.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:55:59 vm02 bash[55303]: cluster 2026-03-10T05:55:59.672496+0000 mon.a (mon.0) 486 : cluster [WRN] Health check failed: Degraded data redundancy: 18/723 objects degraded (2.490%), 6 pgs degraded (PG_DEGRADED)
2026-03-10T05:56:00.731 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:56:00 vm05 bash[51877]: debug 2026-03-10T05:56:00.347+0000 7f51573fc740 -1 osd.7 0 read_superblock omap replica is missing.
2026-03-10T05:56:00.731 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:56:00 vm05 bash[51877]: debug 2026-03-10T05:56:00.363+0000 7f51573fc740 -1 osd.7 126 log_to_monitors true
2026-03-10T05:56:00.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:00 vm05 bash[43541]: audit 2026-03-10T05:56:00.366973+0000 mon.a (mon.0) 487 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-10T05:56:00.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:00 vm05 bash[43541]: audit 2026-03-10T05:56:00.371268+0000 mon.b (mon.2) 13 : audit [INF] from='osd.7 [v2:192.168.123.105:6824/2307383287,v1:192.168.123.105:6825/2307383287]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-10T05:56:01.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:00 vm02 bash[56371]: audit 2026-03-10T05:56:00.366973+0000 mon.a (mon.0) 487 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-10T05:56:01.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:00 vm02 bash[56371]: audit 2026-03-10T05:56:00.371268+0000 mon.b (mon.2) 13 : audit [INF] from='osd.7 [v2:192.168.123.105:6824/2307383287,v1:192.168.123.105:6825/2307383287]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-10T05:56:01.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:00 vm02 bash[55303]: audit 2026-03-10T05:56:00.366973+0000 mon.a (mon.0) 487 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-10T05:56:01.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:00 vm02 bash[55303]: audit 2026-03-10T05:56:00.371268+0000 mon.b (mon.2) 13 : audit [INF] from='osd.7 [v2:192.168.123.105:6824/2307383287,v1:192.168.123.105:6825/2307383287]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-10T05:56:01.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:01 vm05 bash[43541]: audit 2026-03-10T05:56:00.738003+0000 mon.a (mon.0) 488 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished
2026-03-10T05:56:01.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:01 vm05 bash[43541]: cluster 2026-03-10T05:56:00.744803+0000 mon.a (mon.0) 489 : cluster [DBG] osdmap e129: 8 total, 7 up, 8 in
2026-03-10T05:56:01.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:01 vm05 bash[43541]: audit 2026-03-10T05:56:00.745377+0000 mon.a (mon.0) 490 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T05:56:01.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:01 vm05 bash[43541]: audit 2026-03-10T05:56:00.748988+0000 mon.b (mon.2) 14 : audit [INF] from='osd.7 [v2:192.168.123.105:6824/2307383287,v1:192.168.123.105:6825/2307383287]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T05:56:01.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:01 vm05 bash[43541]: cluster 2026-03-10T05:56:00.858386+0000 mgr.y (mgr.24992) 213 : cluster [DBG] pgmap v126: 161 pgs: 31 active+undersized, 24 peering, 10 active+undersized+degraded, 96 active+clean; 457 KiB data, 278 MiB used, 160 GiB / 160 GiB avail; 28/723 objects degraded (3.873%)
2026-03-10T05:56:01.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:01 vm05 bash[43541]: audit 2026-03-10T05:56:00.963629+0000 mon.a (mon.0) 491 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:01.999 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:56:01 vm05 bash[51877]: debug 2026-03-10T05:56:01.675+0000 7f514e9a6640 -1 osd.7 126 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-10T05:56:02.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:01 vm02 bash[56371]: audit 2026-03-10T05:56:00.738003+0000 mon.a (mon.0) 488 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished
2026-03-10T05:56:02.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:01 vm02 bash[56371]: cluster 2026-03-10T05:56:00.744803+0000 mon.a (mon.0) 489 : cluster [DBG] osdmap e129: 8 total, 7 up, 8 in
2026-03-10T05:56:02.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:01 vm02 bash[56371]: audit 2026-03-10T05:56:00.745377+0000 mon.a (mon.0) 490 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T05:56:02.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:01 vm02 bash[56371]: audit 2026-03-10T05:56:00.748988+0000 mon.b (mon.2) 14 : audit [INF] from='osd.7 [v2:192.168.123.105:6824/2307383287,v1:192.168.123.105:6825/2307383287]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T05:56:02.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:01 vm02 bash[56371]: cluster 2026-03-10T05:56:00.858386+0000 mgr.y (mgr.24992) 213 : cluster [DBG] pgmap v126: 161 pgs: 31 active+undersized, 24 peering, 10 active+undersized+degraded, 96 active+clean; 457 KiB data, 278 MiB used, 160 GiB / 160 GiB avail; 28/723 objects degraded (3.873%)
2026-03-10T05:56:02.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:01 vm02 bash[56371]: audit 2026-03-10T05:56:00.963629+0000 mon.a (mon.0) 491 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:02.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:01 vm02 bash[55303]: audit 2026-03-10T05:56:00.738003+0000 mon.a (mon.0) 488 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished
2026-03-10T05:56:02.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:01 vm02 bash[55303]: cluster 2026-03-10T05:56:00.744803+0000 mon.a (mon.0) 489 : cluster [DBG] osdmap e129: 8 total, 7 up, 8 in
2026-03-10T05:56:02.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:01 vm02 bash[55303]: audit 2026-03-10T05:56:00.745377+0000 mon.a (mon.0) 490 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T05:56:02.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:01 vm02 bash[55303]: audit 2026-03-10T05:56:00.748988+0000 mon.b (mon.2) 14 : audit [INF] from='osd.7 [v2:192.168.123.105:6824/2307383287,v1:192.168.123.105:6825/2307383287]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T05:56:02.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:01 vm02 bash[55303]: cluster 2026-03-10T05:56:00.858386+0000 mgr.y (mgr.24992) 213 : cluster [DBG] pgmap v126: 161 pgs: 31 active+undersized, 24 peering, 10 active+undersized+degraded, 96 active+clean; 457 KiB data, 278 MiB used, 160 GiB / 160 GiB avail; 28/723 objects degraded (3.873%)
2026-03-10T05:56:02.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:01 vm02 bash[55303]: audit 2026-03-10T05:56:00.963629+0000 mon.a (mon.0) 491 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:03.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:02 vm02 bash[56371]: cluster 2026-03-10T05:56:01.960868+0000 mon.a (mon.0) 492 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T05:56:03.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:02 vm02 bash[56371]: cluster 2026-03-10T05:56:01.980966+0000 mon.a (mon.0) 493 : cluster [INF] osd.7 [v2:192.168.123.105:6824/2307383287,v1:192.168.123.105:6825/2307383287] boot
2026-03-10T05:56:03.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:02 vm02 bash[56371]: cluster 2026-03-10T05:56:01.981064+0000 mon.a (mon.0) 494 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in
2026-03-10T05:56:03.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:02 vm02 bash[56371]: audit 2026-03-10T05:56:01.986249+0000 mon.a (mon.0) 495 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T05:56:03.085 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:56:02 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:56:02] "GET /metrics HTTP/1.1" 200 38104 "" "Prometheus/2.51.0"
2026-03-10T05:56:03.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:02 vm02 bash[55303]: cluster 2026-03-10T05:56:01.960868+0000 mon.a (mon.0) 492 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T05:56:03.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:02 vm02 bash[55303]: cluster 2026-03-10T05:56:01.980966+0000 mon.a (mon.0) 493 : cluster [INF] osd.7 [v2:192.168.123.105:6824/2307383287,v1:192.168.123.105:6825/2307383287] boot
2026-03-10T05:56:03.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:02 vm02 bash[55303]: cluster 2026-03-10T05:56:01.981064+0000 mon.a (mon.0) 494 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in
2026-03-10T05:56:03.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:02 vm02 bash[55303]: audit 2026-03-10T05:56:01.986249+0000 mon.a (mon.0) 495 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T05:56:03.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:02 vm05 bash[43541]: cluster 2026-03-10T05:56:01.960868+0000 mon.a (mon.0) 492 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T05:56:03.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:02 vm05 bash[43541]: cluster 2026-03-10T05:56:01.980966+0000 mon.a (mon.0) 493 : cluster [INF] osd.7 [v2:192.168.123.105:6824/2307383287,v1:192.168.123.105:6825/2307383287] boot
2026-03-10T05:56:03.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:02 vm05 bash[43541]: cluster 2026-03-10T05:56:01.981064+0000 mon.a (mon.0) 494 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in
2026-03-10T05:56:03.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:02 vm05 bash[43541]: audit 2026-03-10T05:56:01.986249+0000 mon.a (mon.0) 495 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T05:56:04.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:03 vm05 bash[43541]: cluster 2026-03-10T05:56:01.661583+0000 osd.7 (osd.7) 1 : cluster [WRN] OSD bench result of 33245.269938 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
2026-03-10T05:56:04.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:03 vm05 bash[43541]: cluster 2026-03-10T05:56:02.858668+0000 mgr.y (mgr.24992) 214 : cluster [DBG] pgmap v128: 161 pgs: 44 active+undersized, 25 active+undersized+degraded, 92 active+clean; 457 KiB data, 278 MiB used, 160 GiB / 160 GiB avail; 95/723 objects degraded (13.140%)
2026-03-10T05:56:04.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:03 vm05 bash[43541]: cluster 2026-03-10T05:56:02.978732+0000 mon.a (mon.0) 496 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in
2026-03-10T05:56:04.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:03 vm05 bash[43541]: audit 2026-03-10T05:56:03.467768+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:04.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:03 vm05 bash[43541]: audit 2026-03-10T05:56:03.474126+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:04.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:03 vm05 bash[43541]: cluster 2026-03-10T05:56:03.748712+0000 mon.a (mon.0) 499 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 6 pgs peering)
2026-03-10T05:56:04.249 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:56:04 vm05 bash[41269]: ts=2026-03-10T05:56:04.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.7\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.7\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.7\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:56:04.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:03 vm02 bash[56371]: cluster 2026-03-10T05:56:01.661583+0000 osd.7 (osd.7) 1 : cluster [WRN] OSD bench result of 33245.269938 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
2026-03-10T05:56:04.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:03 vm02 bash[56371]: cluster 2026-03-10T05:56:02.858668+0000 mgr.y (mgr.24992) 214 : cluster [DBG] pgmap v128: 161 pgs: 44 active+undersized, 25 active+undersized+degraded, 92 active+clean; 457 KiB data, 278 MiB used, 160 GiB / 160 GiB avail; 95/723 objects degraded (13.140%)
2026-03-10T05:56:04.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:03 vm02 bash[56371]: cluster 2026-03-10T05:56:02.978732+0000 mon.a (mon.0) 496 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in
2026-03-10T05:56:04.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:03 vm02 bash[56371]: audit 2026-03-10T05:56:03.467768+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:04.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:03 vm02 bash[56371]: audit 2026-03-10T05:56:03.474126+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:04.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:03 vm02 bash[56371]: cluster 2026-03-10T05:56:03.748712+0000 mon.a (mon.0) 499 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 6 pgs peering)
2026-03-10T05:56:04.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:03 vm02 bash[55303]: cluster 2026-03-10T05:56:01.661583+0000 osd.7 (osd.7) 1 : cluster [WRN] OSD bench result of 33245.269938 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
2026-03-10T05:56:04.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:03 vm02 bash[55303]: cluster 2026-03-10T05:56:02.858668+0000 mgr.y (mgr.24992) 214 : cluster [DBG] pgmap v128: 161 pgs: 44 active+undersized, 25 active+undersized+degraded, 92 active+clean; 457 KiB data, 278 MiB used, 160 GiB / 160 GiB avail; 95/723 objects degraded (13.140%)
2026-03-10T05:56:04.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:03 vm02 bash[55303]: cluster 2026-03-10T05:56:02.978732+0000 mon.a (mon.0) 496 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in
2026-03-10T05:56:04.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:03 vm02 bash[55303]: audit 2026-03-10T05:56:03.467768+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:04.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:03 vm02 bash[55303]: audit 2026-03-10T05:56:03.474126+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:04.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:03 vm02 bash[55303]: cluster 2026-03-10T05:56:03.748712+0000 mon.a (mon.0) 499 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 6 pgs peering)
2026-03-10T05:56:05.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:04 vm05 bash[43541]: audit 2026-03-10T05:56:03.992490+0000 mon.a (mon.0) 500 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:05.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:04 vm05 bash[43541]: audit 2026-03-10T05:56:04.000499+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:05.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:04 vm02 bash[56371]: audit 2026-03-10T05:56:03.992490+0000 mon.a (mon.0) 500 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:05.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:04 vm02 bash[56371]: audit 2026-03-10T05:56:04.000499+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:05.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:04 vm02 bash[55303]: audit 2026-03-10T05:56:03.992490+0000 mon.a (mon.0) 500 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:05.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:04 vm02 bash[55303]: audit 2026-03-10T05:56:04.000499+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:06.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:06 vm05 bash[43541]: cluster 2026-03-10T05:56:04.859001+0000 mgr.y (mgr.24992) 215 : cluster [DBG] pgmap v130: 161 pgs: 4 peering, 31 active+undersized, 21 active+undersized+degraded, 105 active+clean; 457 KiB data, 278 MiB used, 160 GiB / 160 GiB avail; 86/723 objects degraded (11.895%)
2026-03-10T05:56:06.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:06 vm05 bash[43541]: cluster 2026-03-10T05:56:04.998177+0000 mon.a (mon.0) 502 : cluster [WRN] Health check update: Degraded data redundancy: 86/723 objects degraded (11.895%), 21 pgs degraded (PG_DEGRADED)
2026-03-10T05:56:06.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:05 vm02 bash[56371]: cluster 2026-03-10T05:56:04.859001+0000 mgr.y (mgr.24992) 215 : cluster [DBG] pgmap v130: 161 pgs: 4 peering, 31 active+undersized, 21 active+undersized+degraded, 105 active+clean; 457 KiB data, 278 MiB used, 160 GiB / 160 GiB avail; 86/723 objects degraded (11.895%)
2026-03-10T05:56:06.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:05 vm02 bash[56371]: cluster 2026-03-10T05:56:04.998177+0000 mon.a (mon.0) 502 : cluster [WRN] Health check update: Degraded data redundancy: 86/723 objects degraded (11.895%), 21 pgs degraded (PG_DEGRADED)
2026-03-10T05:56:06.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:05 vm02 bash[55303]: cluster 2026-03-10T05:56:04.859001+0000 mgr.y (mgr.24992) 215 : cluster [DBG] pgmap v130: 161 pgs: 4 peering, 31 active+undersized, 21 active+undersized+degraded, 105 active+clean; 457 KiB data, 278 MiB used, 160 GiB / 160 GiB avail; 86/723 objects degraded (11.895%)
2026-03-10T05:56:06.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:05 vm02 bash[55303]: cluster 2026-03-10T05:56:04.998177+0000 mon.a (mon.0) 502 : cluster [WRN] Health check update: Degraded data redundancy: 86/723 objects degraded (11.895%), 21 pgs degraded (PG_DEGRADED)
2026-03-10T05:56:07.249 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:56:06 vm05 bash[41269]: ts=2026-03-10T05:56:06.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
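[editor's note] The CephNodeDiskspaceWarning failure above is a PromQL many-to-many matching error rather than a disk problem: for a short while after the upgrade, Prometheus holds two node_uname_info series for vm05 (one carrying the new "cluster" label, one without), so the "* on (instance) group_left (nodename)" join in the rule no longer has a unique right-hand side. A minimal way to confirm the duplicate series, assuming the cephadm default Prometheus port 9095 (adjust host/port to the deployment):

    # List every node_uname_info series currently held for vm05; two entries,
    # one with and one without the "cluster" label, reproduce the error.
    curl -sG 'http://vm05:9095/api/v1/series' \
      --data-urlencode 'match[]=node_uname_info{instance="vm05"}' | jq .

The unlabelled series goes stale once only the relabelled target is being scraped, so the warning is normally transient during upgrades.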
2026-03-10T05:56:08.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:08 vm02 bash[56371]: cluster 2026-03-10T05:56:06.859394+0000 mgr.y (mgr.24992) 216 : cluster [DBG] pgmap v131: 161 pgs: 4 peering, 1 active+undersized, 3 active+undersized+degraded, 153 active+clean; 457 KiB data, 282 MiB used, 160 GiB / 160 GiB avail; 1.6 KiB/s rd, 1 op/s; 15/723 objects degraded (2.075%)
2026-03-10T05:56:08.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:08 vm02 bash[56371]: audit 2026-03-10T05:56:07.000541+0000 mgr.y (mgr.24992) 217 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:56:08.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:08 vm02 bash[55303]: cluster 2026-03-10T05:56:06.859394+0000 mgr.y (mgr.24992) 216 : cluster [DBG] pgmap v131: 161 pgs: 4 peering, 1 active+undersized, 3 active+undersized+degraded, 153 active+clean; 457 KiB data, 282 MiB used, 160 GiB / 160 GiB avail; 1.6 KiB/s rd, 1 op/s; 15/723 objects degraded (2.075%)
2026-03-10T05:56:08.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:08 vm02 bash[55303]: audit 2026-03-10T05:56:07.000541+0000 mgr.y (mgr.24992) 217 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:56:08.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:08 vm05 bash[43541]: cluster 2026-03-10T05:56:06.859394+0000 mgr.y (mgr.24992) 216 : cluster [DBG] pgmap v131: 161 pgs: 4 peering, 1 active+undersized, 3 active+undersized+degraded, 153 active+clean; 457 KiB data, 282 MiB used, 160 GiB / 160 GiB avail; 1.6 KiB/s rd, 1 op/s; 15/723 objects degraded (2.075%)
2026-03-10T05:56:08.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:08 vm05 bash[43541]: audit 2026-03-10T05:56:07.000541+0000 mgr.y (mgr.24992) 217 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:56:09.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:09 vm02 bash[56371]: cluster 2026-03-10T05:56:09.000140+0000 mon.a (mon.0) 503 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 15/723 objects degraded (2.075%), 3 pgs degraded)
2026-03-10T05:56:09.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:09 vm02 bash[55303]: cluster 2026-03-10T05:56:09.000140+0000 mon.a (mon.0) 503 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 15/723 objects degraded (2.075%), 3 pgs degraded)
2026-03-10T05:56:09.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:09 vm05 bash[43541]: cluster 2026-03-10T05:56:09.000140+0000 mon.a (mon.0) 503 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 15/723 objects degraded (2.075%), 3 pgs degraded)
2026-03-10T05:56:10.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:10 vm02 bash[56371]: cluster 2026-03-10T05:56:08.859725+0000 mgr.y (mgr.24992) 218 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 282 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:56:10.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:10 vm02 bash[55303]: cluster 2026-03-10T05:56:08.859725+0000 mgr.y (mgr.24992) 218 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 282 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:56:10.402 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:10 vm05 bash[43541]: cluster 2026-03-10T05:56:08.859725+0000 mgr.y (mgr.24992) 218 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 282 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
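[editor's note] The pgmap progression above (v128 through v132) shows recovery converging after the OSD restarts: 95/723 objects degraded, then 86/723, then 15/723, and finally all 161 PGs active+clean with PG_DEGRADED cleared. For anyone replaying the run, the same end state can be spot-checked read-only from any mon shell:

    # "161 pgs: 161 active+clean" here corresponds to pgmap v132 in the log.
    ceph pg stat
    ceph health detail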
2026-03-10T05:56:11.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.471025+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:11.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.477660+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:11.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.479026+0000 mon.a (mon.0) 506 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:56:11.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.479558+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:56:11.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.484432+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:11.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.524068+0000 mon.a (mon.0) 509 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:56:11.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.525297+0000 mon.a (mon.0) 510 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:11.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.526276+0000 mon.a (mon.0) 511 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:11.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.526985+0000 mon.a (mon.0) 512 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:11.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.528179+0000 mon.a (mon.0) 513 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:11.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: cephadm 2026-03-10T05:56:10.528596+0000 mgr.y (mgr.24992) 219 : cephadm [INF] Upgrade: Setting container_image for all osd
2026-03-10T05:56:11.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.534131+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:11.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.536637+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch
2026-03-10T05:56:11.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.541657+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished
2026-03-10T05:56:11.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.543996+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch
2026-03-10T05:56:11.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.548926+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]': finished
2026-03-10T05:56:11.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.551188+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch
2026-03-10T05:56:11.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.555371+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]': finished
2026-03-10T05:56:11.750 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.556811+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch
2026-03-10T05:56:11.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.559227+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]': finished
2026-03-10T05:56:11.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.560503+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch
2026-03-10T05:56:11.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.563156+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]': finished
2026-03-10T05:56:11.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.564080+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch
2026-03-10T05:56:11.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.567315+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]': finished
2026-03-10T05:56:11.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.568273+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch
2026-03-10T05:56:11.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.571501+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]': finished
2026-03-10T05:56:11.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.572534+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch
2026-03-10T05:56:11.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.575793+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]': finished
2026-03-10T05:56:11.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: cephadm 2026-03-10T05:56:10.577427+0000 mgr.y (mgr.24992) 220 : cephadm [INF] Upgrade: Setting require_osd_release to 19 squid
2026-03-10T05:56:11.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.577593+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch
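[editor's note] The two cephadm entries above are the finalization steps of the upgrade. "Upgrade: Setting container_image for all osd" is where cephadm records the target image for the osd daemon class and then clears the per-daemon container_image overrides it used while individual OSDs were being redeployed; the paired "config rm ... osd.N" dispatch/finished audit lines are that cleanup, one OSD at a time. "Upgrade: Setting require_osd_release to 19 squid" is the final gate: once the mons commit it, pre-squid OSDs can no longer join, which is why it is only issued after every OSD runs the new image. Two illustrative read-only checks of the end state (not part of the test itself):

    # No osd.N-scoped container_image override should remain, only the
    # image recorded by the upgrade.
    ceph config dump | grep container_image
    # require_osd_release should now report "squid".
    ceph osd dump | grep require_osd_release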
2026-03-10T05:56:11.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.885552+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:11.751 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:11 vm05 bash[43541]: audit 2026-03-10T05:56:10.886151+0000 mon.a (mon.0) 533 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:56:11.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.471025+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.477660+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.479026+0000 mon.a (mon.0) 506 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.479558+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.484432+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.524068+0000 mon.a (mon.0) 509 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.525297+0000 mon.a (mon.0) 510 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.526276+0000 mon.a (mon.0) 511 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.526985+0000 mon.a (mon.0) 512 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.528179+0000 mon.a (mon.0) 513 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: cephadm 2026-03-10T05:56:10.528596+0000 mgr.y (mgr.24992) 219 : cephadm [INF] Upgrade: Setting container_image for all osd
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.534131+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.536637+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.541657+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.543996+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.548926+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]': finished
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.551188+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.555371+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]': finished
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.556811+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.559227+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]': finished
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.560503+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.563156+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]': finished
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.564080+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.567315+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]': finished
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.568273+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.571501+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]': finished
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.572534+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.575793+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]': finished
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: cephadm 2026-03-10T05:56:10.577427+0000 mgr.y (mgr.24992) 220 : cephadm [INF] Upgrade: Setting require_osd_release to 19 squid
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.577593+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.885552+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:11.836 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:11 vm02 bash[56371]: audit 2026-03-10T05:56:10.886151+0000 mon.a (mon.0) 533 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.471025+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.477660+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.479026+0000 mon.a (mon.0) 506 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.479558+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.484432+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.484432+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.524068+0000 mon.a (mon.0) 509 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.524068+0000 mon.a (mon.0) 509 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.525297+0000 mon.a (mon.0) 510 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.525297+0000 mon.a (mon.0) 510 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.526276+0000 mon.a (mon.0) 511 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.526276+0000 mon.a (mon.0) 511 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.526985+0000 mon.a (mon.0) 512 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.526985+0000 mon.a (mon.0) 512 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.528179+0000 mon.a (mon.0) 513 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.528179+0000 mon.a (mon.0) 513 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: cephadm 2026-03-10T05:56:10.528596+0000 mgr.y (mgr.24992) 219 : cephadm [INF] Upgrade: Setting container_image for all osd 2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: cephadm 2026-03-10T05:56:10.528596+0000 mgr.y (mgr.24992) 219 : cephadm [INF] Upgrade: 
Setting container_image for all osd
2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.534131+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.536637+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch
2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.541657+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished
2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.543996+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch
2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.548926+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]': finished
2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.551188+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch
2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.555371+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]': finished
2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.556811+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch
2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.559227+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]': finished
2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.560503+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch
2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.563156+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]': finished
2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.564080+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch
2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.567315+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]': finished
2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.568273+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch
2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.571501+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]': finished
2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.572534+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch
2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.575793+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]': finished
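The audit trail above is the orchestrator clearing the per-daemon container_image overrides it pinned while each OSD was redeployed. A minimal sketch of the equivalent manual cleanup (the mgr already did this; the commands only make the audit entries concrete, and assume a shell with the admin keyring):

    # drop the per-daemon image pin for each OSD once it runs the target image
    for id in 0 1 2 3 4 5 6 7; do
        ceph config rm "osd.$id" container_image
    done
    # confirm no stray per-daemon pins remain
    ceph config dump | grep container_image || true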
2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: cephadm 2026-03-10T05:56:10.577427+0000 mgr.y (mgr.24992) 220 : cephadm [INF] Upgrade: Setting require_osd_release to 19 squid
2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.577593+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch
2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.885552+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:11.837 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:11 vm02 bash[55303]: audit 2026-03-10T05:56:10.886151+0000 mon.a (mon.0) 533 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:56:12.584 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:56:12 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
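systemd emits the KillMode=none deprecation warning above once per cephadm-managed unit on each host, all pointing at line 23 of the fsid-templated unit file. A hedged sketch of a drop-in that would silence it by switching to the recommended mode (an assumption on my part: cephadm may regenerate the unit and undo a manual override on redeploy):

    # drop-in applies to every instance of the templated unit
    sudo mkdir -p /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service.d
    printf '[Service]\nKillMode=mixed\n' | sudo tee \
        /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service.d/10-killmode.conf
    sudo systemctl daemon-reload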
2026-03-10T05:56:12.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:12 vm02 bash[56371]: cluster 2026-03-10T05:56:10.860025+0000 mgr.y (mgr.24992) 221 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 282 MiB used, 160 GiB / 160 GiB avail; 575 B/s rd, 0 op/s
2026-03-10T05:56:12.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:12 vm02 bash[56371]: cluster 2026-03-10T05:56:11.575980+0000 mon.a (mon.0) 534 : cluster [INF] Health check cleared: OSD_UPGRADE_FINISHED (was: all OSDs are running squid or later but require_osd_release < squid)
2026-03-10T05:56:12.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:12 vm02 bash[56371]: cluster 2026-03-10T05:56:11.575993+0000 mon.a (mon.0) 535 : cluster [INF] Cluster is now healthy
2026-03-10T05:56:12.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:12 vm02 bash[56371]: audit 2026-03-10T05:56:11.585975+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "osd require-osd-release", "release": "squid"}]': finished
2026-03-10T05:56:12.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:12 vm02 bash[56371]: cluster 2026-03-10T05:56:11.591960+0000 mon.a (mon.0) 537 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in
2026-03-10T05:56:12.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:12 vm02 bash[56371]: audit 2026-03-10T05:56:11.595221+0000 mon.a (mon.0) 538 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:12.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:12 vm02 bash[56371]: cephadm 2026-03-10T05:56:11.595677+0000 mgr.y (mgr.24992) 222 : cephadm [INF] Upgrade: Setting container_image for all mds
2026-03-10T05:56:12.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:12 vm02 bash[56371]: audit 2026-03-10T05:56:11.599834+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:12.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:12 vm02 bash[56371]: audit 2026-03-10T05:56:12.040008+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:12.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:12 vm02 bash[56371]: audit 2026-03-10T05:56:12.044954+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm02.pbogjd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:56:12.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:12 vm02 bash[56371]: audit 2026-03-10T05:56:12.045670+0000 mon.a (mon.0) 542 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
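The OSD_UPGRADE_FINISHED health check clears as soon as the require-osd-release command commits, and the upgrade moves on to the mds image. A quick verification sketch, assuming a cephadm shell on a mon host:

    # the release floor should now read squid
    ceph osd dump | grep require_osd_release
    # per-daemon version counts shrink toward a single overall entry as the roll continues
    ceph versions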
2026-03-10T05:56:13.139 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:56:12 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:56:12] "GET /metrics HTTP/1.1" 200 38195 "" "Prometheus/2.51.0"
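The access-log line above is Prometheus scraping the active mgr's exporter. A spot check of the same endpoint by hand (a sketch; 9283 is the ceph-mgr prometheus module's default port, and vm02 hosts mgr.y per the journal prefixes):

    curl -s http://vm02:9283/metrics | head -n 5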
2026-03-10T05:56:13.492 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:13 vm02 bash[56371]: cephadm 2026-03-10T05:56:12.034901+0000 mgr.y (mgr.24992) 223 : cephadm [INF] Upgrade: Updating rgw.foo.vm02.pbogjd (1/4)
2026-03-10T05:56:13.492 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:13 vm02 bash[56371]: cephadm 2026-03-10T05:56:12.046145+0000 mgr.y (mgr.24992) 224 : cephadm [INF] Deploying daemon rgw.foo.vm02.pbogjd on vm02
2026-03-10T05:56:13.492 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:13 vm02 bash[56371]: audit 2026-03-10T05:56:13.235234+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:13.492 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:13 vm02 bash[56371]: audit 2026-03-10T05:56:13.240870+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
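The upgrade has reached the rgw tier, updating rgw.foo.vm02.pbogjd as the first of four daemons. A sketch for watching just that daemon class while the orchestrator redeploys them one at a time (assumes the admin keyring):

    # per-daemon view: image, version and status for rgw daemons only
    ceph orch ps --daemon-type rgw
    # service-level view: running/expected counts for the rgw services
    ceph orch ls rgw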
2026-03-10T05:56:14.495 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:56:14 vm05 bash[41269]: ts=2026-03-10T05:56:14.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.3\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.3\", ceph_version=\"ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.3\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:56:14.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:14 vm05 bash[43541]: cluster 2026-03-10T05:56:12.860689+0000 mgr.y (mgr.24992) 225 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 282 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T05:56:14.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:14 vm05 bash[43541]: audit 2026-03-10T05:56:13.909467+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:14.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:14 vm05 bash[43541]: audit 2026-03-10T05:56:13.913119+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm02.pglcfm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:56:14.749 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:14 vm05 bash[43541]: audit 2026-03-10T05:56:13.914026+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:56:15.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:15 vm02 bash[56371]: cephadm 2026-03-10T05:56:13.904696+0000 mgr.y (mgr.24992) 226 : cephadm [INF] Upgrade: Updating rgw.smpl.vm02.pglcfm (2/4)
2026-03-10T05:56:15.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:15 vm02 bash[56371]: cephadm 2026-03-10T05:56:13.914663+0000 mgr.y (mgr.24992) 227 : cephadm [INF] Deploying daemon rgw.smpl.vm02.pglcfm on vm02
2026-03-10T05:56:15.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:15 vm02 bash[56371]: audit 2026-03-10T05:56:15.141559+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:15.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:15 vm02 bash[56371]: audit 2026-03-10T05:56:15.147676+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:15.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:15 vm02 bash[55303]: cephadm 2026-03-10T05:56:13.904696+0000 mgr.y (mgr.24992) 226 : cephadm [INF] Upgrade: Updating rgw.smpl.vm02.pglcfm (2/4)
2026-03-10T05:56:15.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:15 vm02 bash[55303]: cephadm 2026-03-10T05:56:13.914663+0000 mgr.y (mgr.24992) 227 : cephadm [INF] Deploying daemon rgw.smpl.vm02.pglcfm on vm02
2026-03-10T05:56:15.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:15 vm02 bash[55303]: audit 2026-03-10T05:56:15.141559+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:15.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:15 vm02 bash[55303]: audit 2026-03-10T05:56:15.147676+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:15.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:15 vm05 bash[43541]: cephadm 2026-03-10T05:56:13.904696+0000 mgr.y (mgr.24992) 226 : cephadm [INF] Upgrade: Updating rgw.smpl.vm02.pglcfm (2/4)
2026-03-10T05:56:15.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:15 vm05 bash[43541]: cephadm 2026-03-10T05:56:13.914663+0000 mgr.y (mgr.24992) 227 : cephadm [INF] Deploying daemon rgw.smpl.vm02.pglcfm on vm02
2026-03-10T05:56:15.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:15 vm05 bash[43541]: audit 2026-03-10T05:56:15.141559+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:15.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:15 vm05 bash[43541]: audit 2026-03-10T05:56:15.147676+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:16.283 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:16 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:16.283 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:56:16 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:16.283 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:56:16 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:16.283 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:56:16 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:16.283 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:56:16 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:16.283 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:56:16 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:16.283 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:56:16 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:16.283 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:56:16 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:16.283 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:56:16 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:16.533 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:16 vm05 bash[43541]: cluster 2026-03-10T05:56:14.861059+0000 mgr.y (mgr.24992) 228 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 286 MiB used, 160 GiB / 160 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T05:56:16.533 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:16 vm05 bash[43541]: audit 2026-03-10T05:56:15.716490+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:16.533 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:16 vm05 bash[43541]: audit 2026-03-10T05:56:15.719377+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.hvmsxl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:56:16.534 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:16 vm05 bash[43541]: audit 2026-03-10T05:56:15.720238+0000 mon.a (mon.0) 552 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:56:16.534 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:16 vm05 bash[43541]: audit 2026-03-10T05:56:15.971134+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:16.808 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:16 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:16.808 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:56:16 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:16.808 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:56:16 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:16.808 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:56:16 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:16.808 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:56:16 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:16.808 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:56:16 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:16.808 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:56:16 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:16.808 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:56:16 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:16.809 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:56:16 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
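
The audit entries above record the two mon commands cephadm dispatches each time it redeploys a daemon during the staggered update: an auth get-or-create that (re)creates the daemon's key with the rgw capability profile, then config generate-minimal-conf to build the ceph.conf baked into the container. The same calls can be reproduced by hand; a sketch using the entity name and caps exactly as they appear in the audit log:

    # Sketch: the two mon commands visible in the audit entries above,
    # issued manually for the daemon cephadm was redeploying.
    ceph auth get-or-create client.rgw.foo.vm05.hvmsxl \
        mon 'allow *' mgr 'allow rw' osd 'allow rwx tag rgw *=*'
    # Print the minimal ceph.conf cephadm injects into the container:
    ceph config generate-minimal-conf
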
2026-03-10T05:56:16.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:16 vm02 bash[56371]: cluster 2026-03-10T05:56:14.861059+0000 mgr.y (mgr.24992) 228 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 286 MiB used, 160 GiB / 160 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T05:56:16.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:16 vm02 bash[56371]: audit 2026-03-10T05:56:15.716490+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:16.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:16 vm02 bash[56371]: audit 2026-03-10T05:56:15.719377+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.hvmsxl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:56:16.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:16 vm02 bash[56371]: audit 2026-03-10T05:56:15.720238+0000 mon.a (mon.0) 552 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:56:16.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:16 vm02 bash[56371]: audit 2026-03-10T05:56:15.971134+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:16.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:16 vm02 bash[55303]: cluster 2026-03-10T05:56:14.861059+0000 mgr.y (mgr.24992) 228 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 286 MiB used, 160 GiB / 160 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T05:56:16.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:16 vm02 bash[55303]: audit 2026-03-10T05:56:15.716490+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:16.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:16 vm02 bash[55303]: audit 2026-03-10T05:56:15.719377+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.hvmsxl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:56:16.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:16 vm02 bash[55303]: audit 2026-03-10T05:56:15.720238+0000 mon.a (mon.0) 552 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:56:16.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:16 vm02 bash[55303]: audit 2026-03-10T05:56:15.971134+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:17.187 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:56:16 vm05 bash[41269]: ts=2026-03-10T05:56:16.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:56:17.775 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:17 vm05 bash[43541]: cephadm 2026-03-10T05:56:15.712384+0000 mgr.y (mgr.24992) 229 : cephadm [INF] Upgrade: Updating rgw.foo.vm05.hvmsxl (3/4)
2026-03-10T05:56:17.776 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:17 vm05 bash[43541]: cephadm 2026-03-10T05:56:15.720710+0000 mgr.y (mgr.24992) 230 : cephadm [INF] Deploying daemon rgw.foo.vm05.hvmsxl on vm05
2026-03-10T05:56:17.776 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:17 vm05 bash[43541]: audit 2026-03-10T05:56:16.780785+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:17.776 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:17 vm05 bash[43541]: audit 2026-03-10T05:56:16.787975+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:17.776 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:17 vm05 bash[43541]: audit 2026-03-10T05:56:17.451608+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:17.776 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:17 vm05 bash[43541]: audit 2026-03-10T05:56:17.455150+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm05.hqqmap", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:56:17.776 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:17 vm05 bash[43541]: audit 2026-03-10T05:56:17.456134+0000 mon.a (mon.0) 558 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:56:17.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:17 vm02 bash[56371]: cephadm 2026-03-10T05:56:15.712384+0000 mgr.y (mgr.24992) 229 : cephadm [INF] Upgrade: Updating rgw.foo.vm05.hvmsxl (3/4)
2026-03-10T05:56:17.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:17 vm02 bash[56371]: cephadm 2026-03-10T05:56:15.720710+0000 mgr.y (mgr.24992) 230 : cephadm [INF] Deploying daemon rgw.foo.vm05.hvmsxl on vm05
2026-03-10T05:56:17.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:17 vm02 bash[56371]: audit 2026-03-10T05:56:16.780785+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:17.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:17 vm02 bash[56371]: audit 2026-03-10T05:56:16.787975+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:17.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:17 vm02 bash[56371]: audit 2026-03-10T05:56:17.451608+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:17.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:17 vm02 bash[56371]: audit 2026-03-10T05:56:17.455150+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm05.hqqmap", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:56:17.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:17 vm02 bash[56371]: audit 2026-03-10T05:56:17.456134+0000 mon.a (mon.0) 558 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
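
The CephNodeDiskspaceWarning failure above is a PromQL join error, not a disk problem: while the monitoring stack is mid-upgrade, Prometheus briefly holds two node_uname_info series for instance="vm05" (one carrying the new cluster label, one without it), so the rule's "on (instance) group_left (nodename)" join is no longer many-to-one. The condition clears once the stale series age out; a join that tolerates the overlap would first collapse the right-hand side to one series per instance. A sketch of such a query issued ad hoc (the server URL is a placeholder for wherever the deployed Prometheus listens; the expression is the interesting part, and this is not the shipped rule):

    # Sketch: the alert expression with its right-hand side deduplicated,
    # so overlapping old/new scrape targets cannot trigger a
    # "many-to-many matching" evaluation failure.
    promtool query instant http://vm05:9095 \
      'predict_linear(node_filesystem_free_bytes{device=~"/.*"}[2d], 3600 * 24 * 5)
         * on (instance) group_left (nodename)
           (max by (instance, nodename) (node_uname_info)) < 0'

The same many-to-many pattern reappears further down for CephOSDFlapping, where ceph_osd_metadata is reported for osd.3 by both the old (quincy) and new (squid) exporter endpoints during the upgrade.
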
2026-03-10T05:56:17.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:17 vm02 bash[55303]: cephadm 2026-03-10T05:56:15.712384+0000 mgr.y (mgr.24992) 229 : cephadm [INF] Upgrade: Updating rgw.foo.vm05.hvmsxl (3/4)
2026-03-10T05:56:17.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:17 vm02 bash[55303]: cephadm 2026-03-10T05:56:15.720710+0000 mgr.y (mgr.24992) 230 : cephadm [INF] Deploying daemon rgw.foo.vm05.hvmsxl on vm05
2026-03-10T05:56:17.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:17 vm02 bash[55303]: audit 2026-03-10T05:56:16.780785+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:17.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:17 vm02 bash[55303]: audit 2026-03-10T05:56:16.787975+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:17.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:17 vm02 bash[55303]: audit 2026-03-10T05:56:17.451608+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:17.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:17 vm02 bash[55303]: audit 2026-03-10T05:56:17.455150+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm05.hqqmap", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T05:56:17.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:17 vm02 bash[55303]: audit 2026-03-10T05:56:17.456134+0000 mon.a (mon.0) 558 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:56:18.033 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:17 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:18.033 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:56:17 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:18.033 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:56:17 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:18.034 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:56:17 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:18.034 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:56:17 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:18.034 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:56:17 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:18.034 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:56:17 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:18.034 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:56:17 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:18.034 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:56:17 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:18.729 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:56:18 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:18.729 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:56:18 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:18.729 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:56:18 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:18.729 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:18 vm05 bash[43541]: cluster 2026-03-10T05:56:16.861534+0000 mgr.y (mgr.24992) 231 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 286 MiB used, 160 GiB / 160 GiB avail; 112 KiB/s rd, 204 B/s wr, 175 op/s
2026-03-10T05:56:18.729 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:18 vm05 bash[43541]: audit 2026-03-10T05:56:17.010649+0000 mgr.y (mgr.24992) 232 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:56:18.729 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:18 vm05 bash[43541]: cephadm 2026-03-10T05:56:17.399930+0000 mgr.y (mgr.24992) 233 : cephadm [INF] Upgrade: Updating rgw.smpl.vm05.hqqmap (4/4)
2026-03-10T05:56:18.730 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:18 vm05 bash[43541]: cephadm 2026-03-10T05:56:17.457197+0000 mgr.y (mgr.24992) 234 : cephadm [INF] Deploying daemon rgw.smpl.vm05.hqqmap on vm05
2026-03-10T05:56:18.730 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:18 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:18.730 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:56:18 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:18.730 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:56:18 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
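
The (2/4) through (4/4) counters above are cephadm updating the four rgw daemons one at a time; the same progress messages are emitted on the cluster log's cephadm channel, which can be followed live from any host with an admin keyring. A sketch, including the pause/resume controls for holding a staggered upgrade between daemons if one of them misbehaves:

    # Sketch: follow the cephadm channel that produced the
    # "Upgrade: Updating ... (n/4)" messages above.
    ceph -W cephadm
    # Hold the upgrade between daemon updates, then let it continue:
    ceph orch upgrade pause
    ceph orch upgrade resume
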
2026-03-10T05:56:18.730 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:56:18 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:18.730 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 05:56:18 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:18.730 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:56:18 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:56:18.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:18 vm02 bash[56371]: cluster 2026-03-10T05:56:16.861534+0000 mgr.y (mgr.24992) 231 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 286 MiB used, 160 GiB / 160 GiB avail; 112 KiB/s rd, 204 B/s wr, 175 op/s
2026-03-10T05:56:18.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:18 vm02 bash[56371]: audit 2026-03-10T05:56:17.010649+0000 mgr.y (mgr.24992) 232 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:56:18.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:18 vm02 bash[56371]: cephadm 2026-03-10T05:56:17.399930+0000 mgr.y (mgr.24992) 233 : cephadm [INF] Upgrade: Updating rgw.smpl.vm05.hqqmap (4/4)
2026-03-10T05:56:18.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:18 vm02 bash[56371]: cephadm 2026-03-10T05:56:17.457197+0000 mgr.y (mgr.24992) 234 : cephadm [INF] Deploying daemon rgw.smpl.vm05.hqqmap on vm05
2026-03-10T05:56:18.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:18 vm02 bash[55303]: cluster 2026-03-10T05:56:16.861534+0000 mgr.y (mgr.24992) 231 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 286 MiB used, 160 GiB / 160 GiB avail; 112 KiB/s rd, 204 B/s wr, 175 op/s
2026-03-10T05:56:18.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:18 vm02 bash[55303]: audit 2026-03-10T05:56:17.010649+0000 mgr.y (mgr.24992) 232 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:56:18.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:18 vm02 bash[55303]: cephadm 2026-03-10T05:56:17.399930+0000 mgr.y (mgr.24992) 233 : cephadm [INF] Upgrade: Updating rgw.smpl.vm05.hqqmap (4/4)
2026-03-10T05:56:18.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:18 vm02 bash[55303]: cephadm 2026-03-10T05:56:17.457197+0000 mgr.y (mgr.24992) 234 : cephadm [INF] Deploying daemon rgw.smpl.vm05.hqqmap on vm05
2026-03-10T05:56:20.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:19 vm02 bash[56371]: audit 2026-03-10T05:56:18.758043+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:20.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:19 vm02 bash[56371]: audit 2026-03-10T05:56:18.766191+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:20.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:19 vm02 bash[56371]: cluster 2026-03-10T05:56:18.861919+0000 mgr.y (mgr.24992) 235 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 294 MiB used, 160 GiB / 160 GiB avail; 179 KiB/s rd, 204 B/s wr, 282 op/s
2026-03-10T05:56:20.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:19 vm02 bash[55303]: audit 2026-03-10T05:56:18.758043+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:20.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:19 vm02 bash[55303]: audit 2026-03-10T05:56:18.766191+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:20.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:19 vm02 bash[55303]: cluster 2026-03-10T05:56:18.861919+0000 mgr.y (mgr.24992) 235 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 294 MiB used, 160 GiB / 160 GiB avail; 179 KiB/s rd, 204 B/s wr, 282 op/s
2026-03-10T05:56:20.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:19 vm05 bash[43541]: audit 2026-03-10T05:56:18.758043+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:20.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:19 vm05 bash[43541]: audit 2026-03-10T05:56:18.766191+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:20.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:19 vm05 bash[43541]: cluster 2026-03-10T05:56:18.861919+0000 mgr.y (mgr.24992) 235 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 294 MiB used, 160 GiB / 160 GiB avail; 179 KiB/s rd, 204 B/s wr, 282 op/s
2026-03-10T05:56:22.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:21 vm05 bash[43541]: cluster 2026-03-10T05:56:20.862258+0000 mgr.y (mgr.24992) 236 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 298 MiB used, 160 GiB / 160 GiB avail; 265 KiB/s rd, 204 B/s wr, 420 op/s
2026-03-10T05:56:22.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:21 vm02 bash[56371]: cluster 2026-03-10T05:56:20.862258+0000 mgr.y (mgr.24992) 236 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 298 MiB used, 160 GiB / 160 GiB avail; 265 KiB/s rd, 204 B/s wr, 420 op/s
2026-03-10T05:56:22.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:21 vm02 bash[55303]: cluster 2026-03-10T05:56:20.862258+0000 mgr.y (mgr.24992) 236 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 298 MiB used, 160 GiB / 160 GiB avail; 265 KiB/s rd, 204 B/s wr, 420 op/s
2026-03-10T05:56:23.335 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:56:22 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:56:22] "GET /metrics HTTP/1.1" 200 38195 "" "Prometheus/2.51.0"
2026-03-10T05:56:24.143 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:23 vm05 bash[43541]: cluster 2026-03-10T05:56:22.862589+0000 mgr.y (mgr.24992) 237 : cluster [DBG] pgmap v140: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 322 KiB/s rd, 363 B/s wr, 509 op/s
2026-03-10T05:56:24.143 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:56:24 vm05 bash[41269]: ts=2026-03-10T05:56:24.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.3\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.3\", ceph_version=\"ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.3\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:56:24.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:23 vm02 bash[56371]: cluster 2026-03-10T05:56:22.862589+0000 mgr.y (mgr.24992) 237 : cluster [DBG] pgmap v140: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 322 KiB/s rd, 363 B/s wr, 509 op/s
2026-03-10T05:56:24.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:23 vm02 bash[55303]: cluster 2026-03-10T05:56:22.862589+0000 mgr.y (mgr.24992) 237 : cluster [DBG] pgmap v140: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 322 KiB/s rd, 363 B/s wr, 509 op/s
2026-03-10T05:56:25.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:24 vm05 bash[43541]: audit 2026-03-10T05:56:23.992878+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:25.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:25 vm05 bash[43541]: audit 2026-03-10T05:56:23.998107+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:25.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:25 vm05 bash[43541]: audit 2026-03-10T05:56:24.107880+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:25.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:25 vm05 bash[43541]: audit 2026-03-10T05:56:24.116192+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:25.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:25 vm05 bash[43541]: audit 2026-03-10T05:56:24.531734+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:25.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:25 vm05 bash[43541]: audit 2026-03-10T05:56:24.537313+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:25.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:25 vm05 bash[43541]: audit 2026-03-10T05:56:24.645803+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:25.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:25 vm05 bash[43541]: audit 2026-03-10T05:56:24.651619+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:25.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:24 vm02 bash[56371]: audit 2026-03-10T05:56:23.992878+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:25.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:24 vm02 bash[56371]: audit 2026-03-10T05:56:23.998107+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:25.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:24 vm02 bash[56371]: audit 2026-03-10T05:56:24.107880+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:25.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:24 vm02 bash[56371]: audit 2026-03-10T05:56:24.116192+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:25.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:24 vm02 bash[56371]: audit 2026-03-10T05:56:24.531734+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:25.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:24 vm02 bash[56371]: audit 2026-03-10T05:56:24.537313+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:25.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:24 vm02 bash[56371]: audit 2026-03-10T05:56:24.645803+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:25.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:24 vm02 bash[56371]: audit 2026-03-10T05:56:24.651619+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:25.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:24 vm02 bash[55303]: audit 2026-03-10T05:56:23.992878+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:25.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:24 vm02 bash[55303]: audit 2026-03-10T05:56:23.998107+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:25.335
INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:24 vm02 bash[55303]: audit 2026-03-10T05:56:24.107880+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:25.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:24 vm02 bash[55303]: audit 2026-03-10T05:56:24.107880+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:25.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:24 vm02 bash[55303]: audit 2026-03-10T05:56:24.116192+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:25.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:24 vm02 bash[55303]: audit 2026-03-10T05:56:24.116192+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:25.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:24 vm02 bash[55303]: audit 2026-03-10T05:56:24.531734+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:25.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:24 vm02 bash[55303]: audit 2026-03-10T05:56:24.531734+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:25.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:24 vm02 bash[55303]: audit 2026-03-10T05:56:24.537313+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:25.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:24 vm02 bash[55303]: audit 2026-03-10T05:56:24.537313+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:25.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:24 vm02 bash[55303]: audit 2026-03-10T05:56:24.645803+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:25.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:24 vm02 bash[55303]: audit 2026-03-10T05:56:24.645803+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:25.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:24 vm02 bash[55303]: audit 2026-03-10T05:56:24.651619+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:25.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:24 vm02 bash[55303]: audit 2026-03-10T05:56:24.651619+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:26.735 INFO:teuthology.orchestra.run.vm02.stdout:true 2026-03-10T05:56:26.944 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:26 vm05 bash[43541]: cluster 2026-03-10T05:56:24.862911+0000 mgr.y (mgr.24992) 238 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 313 KiB/s rd, 341 B/s wr, 495 op/s 2026-03-10T05:56:26.944 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:26 vm05 bash[43541]: cluster 2026-03-10T05:56:24.862911+0000 mgr.y (mgr.24992) 238 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 313 KiB/s rd, 341 B/s wr, 495 op/s 2026-03-10T05:56:26.944 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:26 vm05 bash[43541]: audit 2026-03-10T05:56:25.886174+0000 mon.a (mon.0) 569 : audit [INF] 
2026-03-10T05:56:26.944 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:26 vm05 bash[43541]: audit 2026-03-10T05:56:25.887471+0000 mon.a (mon.0) 570 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:56:27.015 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:26 vm02 bash[56371]: cluster 2026-03-10T05:56:24.862911+0000 mgr.y (mgr.24992) 238 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 313 KiB/s rd, 341 B/s wr, 495 op/s
2026-03-10T05:56:27.015 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:26 vm02 bash[56371]: audit 2026-03-10T05:56:25.886174+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:27.015 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:26 vm02 bash[56371]: audit 2026-03-10T05:56:25.887471+0000 mon.a (mon.0) 570 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:56:27.015 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:26 vm02 bash[55303]: cluster 2026-03-10T05:56:24.862911+0000 mgr.y (mgr.24992) 238 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 313 KiB/s rd, 341 B/s wr, 495 op/s
2026-03-10T05:56:27.015 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:26 vm02 bash[55303]: audit 2026-03-10T05:56:25.886174+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:27.015 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:26 vm02 bash[55303]: audit 2026-03-10T05:56:25.887471+0000 mon.a (mon.0) 570 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:56:27.108 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T05:56:27.108 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.a vm02 *:9093,9094 running (4m) 3s ago 9m 13.2M - 0.25.0 c8568f914cd2 7a7c5c2cddb6
2026-03-10T05:56:27.108 INFO:teuthology.orchestra.run.vm02.stdout:grafana.a vm05 *:3000 running (4m) 3s ago 9m 40.3M - dad864ee21e9 95c6d977988a
2026-03-10T05:56:27.108 INFO:teuthology.orchestra.run.vm02.stdout:iscsi.foo.vm02.mxbwmh vm02 running (3m) 3s ago 8m 44.6M - 3.5 e1d6a67b021e 62aba5b41046
2026-03-10T05:56:27.108 INFO:teuthology.orchestra.run.vm02.stdout:mgr.x vm05 *:8443,9283,8765 running (3m) 3s ago 11m 465M - 19.2.3-678-ge911bdeb 654f31e6858e 7579626ada90
2026-03-10T05:56:27.108 INFO:teuthology.orchestra.run.vm02.stdout:mgr.y vm02 *:8443,9283,8765 running (4m) 3s ago 12m 535M - 19.2.3-678-ge911bdeb 654f31e6858e ef46d0f7b15e
2026-03-10T05:56:27.108 INFO:teuthology.orchestra.run.vm02.stdout:mon.a vm02 running (3m) 3s ago 12m 52.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e df3a0a290a95
2026-03-10T05:56:27.108 INFO:teuthology.orchestra.run.vm02.stdout:mon.b vm05 running (3m) 3s ago 12m 44.6M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1da04b90d16b
2026-03-10T05:56:27.108 INFO:teuthology.orchestra.run.vm02.stdout:mon.c vm02 running (3m) 3s ago 12m 49.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 7f2cdf1b7aa6
2026-03-10T05:56:27.108 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.a vm02 *:9100 running (4m) 3s ago 9m 7528k - 1.7.0 72c9c2088986 90288450bd1f
2026-03-10T05:56:27.108 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.b vm05 *:9100 running (4m) 3s ago 9m 7700k - 1.7.0 72c9c2088986 4e859143cb0e
2026-03-10T05:56:27.108 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (2m) 3s ago 11m 75.1M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 640360275f83
2026-03-10T05:56:27.108 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (93s) 3s ago 11m 56.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 4de5c460789a
2026-03-10T05:56:27.108 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (2m) 3s ago 11m 51.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 51dac2f581d9
2026-03-10T05:56:27.108 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm02 running (2m) 3s ago 10m 80.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 0eca961791f4
2026-03-10T05:56:27.108 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (77s) 3s ago 10m 57.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 2c1b499265f4
2026-03-10T05:56:27.108 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (60s) 3s ago 10m 75.1M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7ec1a1246098
2026-03-10T05:56:27.108 INFO:teuthology.orchestra.run.vm02.stdout:osd.6 vm05 running (44s) 3s ago 10m 73.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e bd151ab03026
2026-03-10T05:56:27.108 INFO:teuthology.orchestra.run.vm02.stdout:osd.7 vm05 running (28s) 3s ago 9m 72.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 83fe4a7f26f5
2026-03-10T05:56:27.108 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.a vm05 *:9095 running (3m) 3s ago 9m 39.2M - 2.51.0 1d3b7f56885b 3328811f8f28
2026-03-10T05:56:27.108 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm02.pbogjd vm02 *:8000 running (13s) 3s ago 8m 92.2M - 19.2.3-678-ge911bdeb 654f31e6858e 4e1a47dc4ede
2026-03-10T05:56:27.108 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm05.hvmsxl vm05 *:8000 running (10s) 3s ago 8m 92.0M - 19.2.3-678-ge911bdeb 654f31e6858e 51931a978021
2026-03-10T05:56:27.108 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm02.pglcfm vm02 *:80 running (11s) 3s ago 8m 92.1M - 19.2.3-678-ge911bdeb 654f31e6858e a59d6d93b54c
2026-03-10T05:56:27.108 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm05.hqqmap vm05 *:80 running (8s) 3s ago 8m 91.9M - 19.2.3-678-ge911bdeb 654f31e6858e 62b012e7d3ec
2026-03-10T05:56:27.249 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:56:26 vm05 bash[41269]: ts=2026-03-10T05:56:26.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T05:56:27.325 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:56:27.325 INFO:teuthology.orchestra.run.vm02.stdout: "mon": {
2026-03-10T05:56:27.325 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-10T05:56:27.325 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:56:27.325 INFO:teuthology.orchestra.run.vm02.stdout: "mgr": {
2026-03-10T05:56:27.325 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T05:56:27.325 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:56:27.325 INFO:teuthology.orchestra.run.vm02.stdout: "osd": {
2026-03-10T05:56:27.325 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 8
2026-03-10T05:56:27.325 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:56:27.325 INFO:teuthology.orchestra.run.vm02.stdout: "rgw": {
2026-03-10T05:56:27.325 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4
2026-03-10T05:56:27.326 INFO:teuthology.orchestra.run.vm02.stdout: },
2026-03-10T05:56:27.326 INFO:teuthology.orchestra.run.vm02.stdout: "overall": {
2026-03-10T05:56:27.326 INFO:teuthology.orchestra.run.vm02.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 17
2026-03-10T05:56:27.326 INFO:teuthology.orchestra.run.vm02.stdout: }
2026-03-10T05:56:27.326 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:56:27.512 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:56:27.513 INFO:teuthology.orchestra.run.vm02.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
2026-03-10T05:56:27.513 INFO:teuthology.orchestra.run.vm02.stdout: "in_progress": true,
2026-03-10T05:56:27.513 INFO:teuthology.orchestra.run.vm02.stdout: "which": "Upgrading all daemon types on all hosts",
2026-03-10T05:56:27.513 INFO:teuthology.orchestra.run.vm02.stdout: "services_complete": [
2026-03-10T05:56:27.513 INFO:teuthology.orchestra.run.vm02.stdout: "mgr",
2026-03-10T05:56:27.513 INFO:teuthology.orchestra.run.vm02.stdout: "rgw",
2026-03-10T05:56:27.513 INFO:teuthology.orchestra.run.vm02.stdout: "mon",
2026-03-10T05:56:27.513 INFO:teuthology.orchestra.run.vm02.stdout: "osd"
2026-03-10T05:56:27.513 INFO:teuthology.orchestra.run.vm02.stdout: ],
2026-03-10T05:56:27.513 INFO:teuthology.orchestra.run.vm02.stdout: "progress": "17/23 daemons upgraded",
2026-03-10T05:56:27.513 INFO:teuthology.orchestra.run.vm02.stdout: "message": "Currently upgrading rgw daemons",
2026-03-10T05:56:27.513 INFO:teuthology.orchestra.run.vm02.stdout: "is_paused": false
2026-03-10T05:56:27.513 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:56:27.739 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_OK
2026-03-10T05:56:27.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:27 vm05 bash[43541]: audit 2026-03-10T05:56:27.329305+0000 mon.b (mon.2) 15 : audit [DBG] from='client.? 192.168.123.102:0/2560958562' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:28.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:27 vm02 bash[56371]: audit 2026-03-10T05:56:27.329305+0000 mon.b (mon.2) 15 : audit [DBG] from='client.? 192.168.123.102:0/2560958562' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:28.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:27 vm02 bash[55303]: audit 2026-03-10T05:56:27.329305+0000 mon.b (mon.2) 15 : audit [DBG] from='client.? 192.168.123.102:0/2560958562' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:28.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:28 vm05 bash[43541]: audit 2026-03-10T05:56:26.725236+0000 mgr.y (mgr.24992) 239 : audit [DBG] from='client.44424 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:56:28.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:28 vm05 bash[43541]: cluster 2026-03-10T05:56:26.863287+0000 mgr.y (mgr.24992) 240 : cluster [DBG] pgmap v142: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 310 KiB/s rd, 341 B/s wr, 489 op/s
2026-03-10T05:56:28.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:28 vm05 bash[43541]: audit 2026-03-10T05:56:26.915620+0000 mgr.y (mgr.24992) 241 : audit [DBG] from='client.44430 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:56:28.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:28 vm05 bash[43541]: audit 2026-03-10T05:56:27.014034+0000 mgr.y (mgr.24992) 242 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:56:28.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:28 vm05 bash[43541]: audit 2026-03-10T05:56:27.102927+0000 mgr.y (mgr.24992) 243 : audit [DBG] from='client.44436 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:56:28.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:28 vm05 bash[43541]: audit 2026-03-10T05:56:27.511440+0000 mgr.y (mgr.24992) 244 : audit [DBG] from='client.34444 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:56:28.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:28 vm05 bash[43541]: audit 2026-03-10T05:56:27.738281+0000 mon.c (mon.1) 13 : audit [DBG] from='client.? 192.168.123.102:0/4224223902' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T05:56:29.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:28 vm02 bash[56371]: audit 2026-03-10T05:56:26.725236+0000 mgr.y (mgr.24992) 239 : audit [DBG] from='client.44424 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:56:29.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:28 vm02 bash[56371]: cluster 2026-03-10T05:56:26.863287+0000 mgr.y (mgr.24992) 240 : cluster [DBG] pgmap v142: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 310 KiB/s rd, 341 B/s wr, 489 op/s
2026-03-10T05:56:29.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:28 vm02 bash[56371]: audit 2026-03-10T05:56:26.915620+0000 mgr.y (mgr.24992) 241 : audit [DBG] from='client.44430 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:56:29.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:28 vm02 bash[56371]: audit 2026-03-10T05:56:27.014034+0000 mgr.y (mgr.24992) 242 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:56:29.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:28 vm02 bash[56371]: audit 2026-03-10T05:56:27.102927+0000 mgr.y (mgr.24992) 243 : audit [DBG] from='client.44436 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:56:29.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:28 vm02 bash[56371]: audit 2026-03-10T05:56:27.511440+0000 mgr.y (mgr.24992) 244 : audit [DBG] from='client.34444 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:56:29.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:28 vm02 bash[56371]: audit 2026-03-10T05:56:27.738281+0000 mon.c (mon.1) 13 : audit [DBG] from='client.? 192.168.123.102:0/4224223902' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T05:56:29.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:28 vm02 bash[55303]: audit 2026-03-10T05:56:26.725236+0000 mgr.y (mgr.24992) 239 : audit [DBG] from='client.44424 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:56:29.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:28 vm02 bash[55303]: cluster 2026-03-10T05:56:26.863287+0000 mgr.y (mgr.24992) 240 : cluster [DBG] pgmap v142: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 310 KiB/s rd, 341 B/s wr, 489 op/s
2026-03-10T05:56:29.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:28 vm02 bash[55303]: audit 2026-03-10T05:56:26.915620+0000 mgr.y (mgr.24992) 241 : audit [DBG] from='client.44430 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:56:29.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:28 vm02 bash[55303]: audit 2026-03-10T05:56:27.014034+0000 mgr.y (mgr.24992) 242 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:56:29.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:28 vm02 bash[55303]: audit 2026-03-10T05:56:27.102927+0000 mgr.y (mgr.24992) 243 : audit [DBG] from='client.44436 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:56:29.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:28 vm02 bash[55303]: audit 2026-03-10T05:56:27.511440+0000 mgr.y (mgr.24992) 244 : audit [DBG] from='client.34444 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:56:29.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:28 vm02 bash[55303]: audit 2026-03-10T05:56:27.738281+0000 mon.c (mon.1) 13 : audit [DBG] from='client.? 192.168.123.102:0/4224223902' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: cluster 2026-03-10T05:56:28.863647+0000 mgr.y (mgr.24992) 245 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 221 KiB/s rd, 170 B/s wr, 349 op/s
2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.121550+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.127746+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.219455+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.226503+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.228334+0000 mon.a (mon.0) 575 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.228817+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.232086+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.276388+0000 mon.a (mon.0) 578 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.277325+0000 mon.a (mon.0) 579 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.277962+0000 mon.a (mon.0) 580 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
"versions"}]: dispatch 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.277962+0000 mon.a (mon.0) 580 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.278515+0000 mon.a (mon.0) 581 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.278515+0000 mon.a (mon.0) 581 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.279221+0000 mon.a (mon.0) 582 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.279221+0000 mon.a (mon.0) 582 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.280159+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.280159+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.280743+0000 mon.a (mon.0) 584 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.280743+0000 mon.a (mon.0) 584 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.286517+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.286517+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.287691+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm02.pbogjd"}]: dispatch 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.287691+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm02.pbogjd"}]: dispatch 
2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.291542+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm02.pbogjd"}]': finished 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.291542+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm02.pbogjd"}]': finished 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.292666+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.hvmsxl"}]: dispatch 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.292666+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.hvmsxl"}]: dispatch 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.296170+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.hvmsxl"}]': finished 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.296170+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.hvmsxl"}]': finished 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.297225+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm02.pglcfm"}]: dispatch 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.297225+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm02.pglcfm"}]: dispatch 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.300573+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm02.pglcfm"}]': finished 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.300573+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm02.pglcfm"}]': finished 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.301573+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' 
entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm05.hqqmap"}]: dispatch 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.301573+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm05.hqqmap"}]: dispatch 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.305197+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm05.hqqmap"}]': finished 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.305197+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm05.hqqmap"}]': finished 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.306481+0000 mon.a (mon.0) 594 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.306481+0000 mon.a (mon.0) 594 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.310760+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.982 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.310760+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.312019+0000 mon.a (mon.0) 596 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.312019+0000 mon.a (mon.0) 596 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.312541+0000 mon.a (mon.0) 597 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.312541+0000 mon.a (mon.0) 597 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.317004+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.317004+0000 mon.a 
(mon.0) 598 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.695646+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.695646+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.696882+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm02.mxbwmh", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.696882+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm02.mxbwmh", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.700846+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:30 vm02 bash[56371]: audit 2026-03-10T05:56:30.700846+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: cluster 2026-03-10T05:56:28.863647+0000 mgr.y (mgr.24992) 245 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 221 KiB/s rd, 170 B/s wr, 349 op/s 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: cluster 2026-03-10T05:56:28.863647+0000 mgr.y (mgr.24992) 245 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 221 KiB/s rd, 170 B/s wr, 349 op/s 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.121550+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.121550+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.127746+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 
2026-03-10T05:56:30.127746+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.219455+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.219455+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.226503+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.226503+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.228334+0000 mon.a (mon.0) 575 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.228334+0000 mon.a (mon.0) 575 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.228817+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.228817+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.232086+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.232086+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.276388+0000 mon.a (mon.0) 578 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.276388+0000 mon.a (mon.0) 578 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.277325+0000 mon.a (mon.0) 579 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.983 
INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.277325+0000 mon.a (mon.0) 579 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.277962+0000 mon.a (mon.0) 580 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.277962+0000 mon.a (mon.0) 580 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.278515+0000 mon.a (mon.0) 581 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.278515+0000 mon.a (mon.0) 581 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.279221+0000 mon.a (mon.0) 582 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.279221+0000 mon.a (mon.0) 582 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.280159+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.280159+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.280743+0000 mon.a (mon.0) 584 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.280743+0000 mon.a (mon.0) 584 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.286517+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.286517+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.287691+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.24992 
192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm02.pbogjd"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.287691+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm02.pbogjd"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.291542+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm02.pbogjd"}]': finished 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.291542+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm02.pbogjd"}]': finished 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.292666+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.hvmsxl"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.292666+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.hvmsxl"}]: dispatch 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.296170+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.hvmsxl"}]': finished 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.296170+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.hvmsxl"}]': finished 2026-03-10T05:56:30.983 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.297225+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm02.pglcfm"}]: dispatch 2026-03-10T05:56:30.984 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.297225+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm02.pglcfm"}]: dispatch 2026-03-10T05:56:30.984 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.300573+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm02.pglcfm"}]': finished 2026-03-10T05:56:30.984 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 
05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.300573+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm02.pglcfm"}]': finished 2026-03-10T05:56:30.984 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.301573+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm05.hqqmap"}]: dispatch 2026-03-10T05:56:30.984 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.301573+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm05.hqqmap"}]: dispatch 2026-03-10T05:56:30.984 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.305197+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm05.hqqmap"}]': finished 2026-03-10T05:56:30.984 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.305197+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm05.hqqmap"}]': finished 2026-03-10T05:56:30.984 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.306481+0000 mon.a (mon.0) 594 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.984 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.306481+0000 mon.a (mon.0) 594 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.984 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.310760+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.984 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.310760+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.984 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.312019+0000 mon.a (mon.0) 596 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.984 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.312019+0000 mon.a (mon.0) 596 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.984 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.312541+0000 mon.a (mon.0) 597 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.984 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.312541+0000 mon.a (mon.0) 597 : audit 
[DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.984 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.317004+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.984 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.317004+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.984 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.695646+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.984 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.695646+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.984 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.696882+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm02.mxbwmh", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T05:56:30.984 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.696882+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm02.mxbwmh", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T05:56:30.984 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.700846+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:56:30.984 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:30 vm02 bash[55303]: audit 2026-03-10T05:56:30.700846+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:56:30.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: cluster 2026-03-10T05:56:28.863647+0000 mgr.y (mgr.24992) 245 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 221 KiB/s rd, 170 B/s wr, 349 op/s 2026-03-10T05:56:30.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: cluster 2026-03-10T05:56:28.863647+0000 mgr.y (mgr.24992) 245 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 221 KiB/s rd, 170 B/s wr, 349 op/s 2026-03-10T05:56:30.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.121550+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 
2026-03-10T05:56:30.121550+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.127746+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.127746+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.219455+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.219455+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.226503+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.226503+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.228334+0000 mon.a (mon.0) 575 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:56:30.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.228334+0000 mon.a (mon.0) 575 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:56:30.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.228817+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:56:30.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.228817+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:56:30.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.232086+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.232086+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:30.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.276388+0000 mon.a (mon.0) 578 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:56:30.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.276388+0000 mon.a (mon.0) 578 : 
audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:56:30.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.277325+0000 mon.a (mon.0) 579 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.277325+0000 mon.a (mon.0) 579 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:30.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.277962+0000 mon.a (mon.0) 580 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.277962+0000 mon.a (mon.0) 580 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.278515+0000 mon.a (mon.0) 581 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.278515+0000 mon.a (mon.0) 581 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.279221+0000 mon.a (mon.0) 582 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.279221+0000 mon.a (mon.0) 582 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.280159+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.280159+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.280743+0000 mon.a (mon.0) 584 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.280743+0000 mon.a (mon.0) 584 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.286517+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' 
entity='mgr.y' 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.286517+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.287691+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm02.pbogjd"}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.287691+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm02.pbogjd"}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.291542+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm02.pbogjd"}]': finished 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.291542+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm02.pbogjd"}]': finished 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.292666+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.hvmsxl"}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.292666+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.hvmsxl"}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.296170+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.hvmsxl"}]': finished 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.296170+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.hvmsxl"}]': finished 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.297225+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm02.pglcfm"}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.297225+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": 
"client.rgw.smpl.vm02.pglcfm"}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.300573+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm02.pglcfm"}]': finished 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.300573+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm02.pglcfm"}]': finished 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.301573+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm05.hqqmap"}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.301573+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm05.hqqmap"}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.305197+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm05.hqqmap"}]': finished 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.305197+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm05.hqqmap"}]': finished 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.306481+0000 mon.a (mon.0) 594 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.306481+0000 mon.a (mon.0) 594 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.310760+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.310760+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.312019+0000 mon.a (mon.0) 596 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.312019+0000 mon.a (mon.0) 596 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": 
"versions"}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.312541+0000 mon.a (mon.0) 597 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.312541+0000 mon.a (mon.0) 597 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.317004+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.317004+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.695646+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.695646+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.696882+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm02.mxbwmh", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.696882+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm02.mxbwmh", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.700846+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:56:31.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:30 vm05 bash[43541]: audit 2026-03-10T05:56:30.700846+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:56:31.260 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:31 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T05:56:31.260 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:56:31 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:56:31.260 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:56:31 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:56:31.260 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:56:31 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:56:31.260 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:56:31 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:56:31.260 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:56:31 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:56:31.260 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:56:31 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:56:31.260 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:56:31 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:56:31.260 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:31 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:56:31.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:31 vm05 bash[43541]: cephadm 2026-03-10T05:56:30.116239+0000 mgr.y (mgr.24992) 246 : cephadm [INF] Detected new or changed devices on vm05 2026-03-10T05:56:31.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:31 vm05 bash[43541]: cephadm 2026-03-10T05:56:30.116239+0000 mgr.y (mgr.24992) 246 : cephadm [INF] Detected new or changed devices on vm05 2026-03-10T05:56:31.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:31 vm05 bash[43541]: cephadm 2026-03-10T05:56:30.214068+0000 mgr.y (mgr.24992) 247 : cephadm [INF] Detected new or changed devices on vm02 2026-03-10T05:56:31.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:31 vm05 bash[43541]: cephadm 2026-03-10T05:56:30.214068+0000 mgr.y (mgr.24992) 247 : cephadm [INF] Detected new or changed devices on vm02 2026-03-10T05:56:31.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:31 vm05 bash[43541]: cephadm 2026-03-10T05:56:30.281032+0000 mgr.y (mgr.24992) 248 : cephadm [INF] Upgrade: Setting container_image for all rgw 2026-03-10T05:56:31.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:31 vm05 bash[43541]: cephadm 2026-03-10T05:56:30.281032+0000 mgr.y (mgr.24992) 248 : cephadm [INF] Upgrade: Setting container_image for all rgw 2026-03-10T05:56:31.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:31 vm05 bash[43541]: cephadm 2026-03-10T05:56:30.306817+0000 mgr.y (mgr.24992) 249 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-10T05:56:31.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:31 vm05 bash[43541]: cephadm 2026-03-10T05:56:30.306817+0000 mgr.y (mgr.24992) 249 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-10T05:56:31.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:31 vm05 bash[43541]: cephadm 2026-03-10T05:56:30.312851+0000 mgr.y (mgr.24992) 250 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-10T05:56:31.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:31 vm05 bash[43541]: cephadm 2026-03-10T05:56:30.312851+0000 mgr.y (mgr.24992) 250 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-10T05:56:31.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:31 vm05 bash[43541]: cephadm 2026-03-10T05:56:30.691678+0000 mgr.y (mgr.24992) 251 : cephadm [INF] Upgrade: Updating iscsi.foo.vm02.mxbwmh 2026-03-10T05:56:31.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:31 vm05 bash[43541]: cephadm 2026-03-10T05:56:30.691678+0000 mgr.y (mgr.24992) 251 : cephadm [INF] Upgrade: Updating iscsi.foo.vm02.mxbwmh 2026-03-10T05:56:31.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:31 vm05 bash[43541]: cephadm 2026-03-10T05:56:30.701586+0000 mgr.y (mgr.24992) 252 : cephadm [INF] Deploying daemon iscsi.foo.vm02.mxbwmh on vm02 2026-03-10T05:56:31.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:31 vm05 bash[43541]: cephadm 2026-03-10T05:56:30.701586+0000 mgr.y (mgr.24992) 252 : cephadm [INF] Deploying daemon iscsi.foo.vm02.mxbwmh on vm02 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:31 vm02 bash[56371]: cephadm 2026-03-10T05:56:30.116239+0000 mgr.y (mgr.24992) 246 : cephadm [INF] Detected new or changed devices on vm05 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:31 vm02 
bash[56371]: cephadm 2026-03-10T05:56:30.116239+0000 mgr.y (mgr.24992) 246 : cephadm [INF] Detected new or changed devices on vm05 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:31 vm02 bash[56371]: cephadm 2026-03-10T05:56:30.214068+0000 mgr.y (mgr.24992) 247 : cephadm [INF] Detected new or changed devices on vm02 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:31 vm02 bash[56371]: cephadm 2026-03-10T05:56:30.214068+0000 mgr.y (mgr.24992) 247 : cephadm [INF] Detected new or changed devices on vm02 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:31 vm02 bash[56371]: cephadm 2026-03-10T05:56:30.281032+0000 mgr.y (mgr.24992) 248 : cephadm [INF] Upgrade: Setting container_image for all rgw 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:31 vm02 bash[56371]: cephadm 2026-03-10T05:56:30.281032+0000 mgr.y (mgr.24992) 248 : cephadm [INF] Upgrade: Setting container_image for all rgw 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:31 vm02 bash[56371]: cephadm 2026-03-10T05:56:30.306817+0000 mgr.y (mgr.24992) 249 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:31 vm02 bash[56371]: cephadm 2026-03-10T05:56:30.306817+0000 mgr.y (mgr.24992) 249 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:31 vm02 bash[56371]: cephadm 2026-03-10T05:56:30.312851+0000 mgr.y (mgr.24992) 250 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:31 vm02 bash[56371]: cephadm 2026-03-10T05:56:30.312851+0000 mgr.y (mgr.24992) 250 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:31 vm02 bash[56371]: cephadm 2026-03-10T05:56:30.691678+0000 mgr.y (mgr.24992) 251 : cephadm [INF] Upgrade: Updating iscsi.foo.vm02.mxbwmh 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:31 vm02 bash[56371]: cephadm 2026-03-10T05:56:30.691678+0000 mgr.y (mgr.24992) 251 : cephadm [INF] Upgrade: Updating iscsi.foo.vm02.mxbwmh 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:31 vm02 bash[56371]: cephadm 2026-03-10T05:56:30.701586+0000 mgr.y (mgr.24992) 252 : cephadm [INF] Deploying daemon iscsi.foo.vm02.mxbwmh on vm02 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:31 vm02 bash[56371]: cephadm 2026-03-10T05:56:30.701586+0000 mgr.y (mgr.24992) 252 : cephadm [INF] Deploying daemon iscsi.foo.vm02.mxbwmh on vm02 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:31 vm02 bash[55303]: cephadm 2026-03-10T05:56:30.116239+0000 mgr.y (mgr.24992) 246 : cephadm [INF] Detected new or changed devices on vm05 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:31 vm02 bash[55303]: cephadm 2026-03-10T05:56:30.116239+0000 mgr.y (mgr.24992) 246 : cephadm [INF] Detected new or changed devices on vm05 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:31 vm02 bash[55303]: cephadm 2026-03-10T05:56:30.214068+0000 mgr.y (mgr.24992) 247 : cephadm [INF] Detected new or changed devices on vm02 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:31 vm02 bash[55303]: cephadm 
2026-03-10T05:56:30.214068+0000 mgr.y (mgr.24992) 247 : cephadm [INF] Detected new or changed devices on vm02 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:31 vm02 bash[55303]: cephadm 2026-03-10T05:56:30.281032+0000 mgr.y (mgr.24992) 248 : cephadm [INF] Upgrade: Setting container_image for all rgw 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:31 vm02 bash[55303]: cephadm 2026-03-10T05:56:30.281032+0000 mgr.y (mgr.24992) 248 : cephadm [INF] Upgrade: Setting container_image for all rgw 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:31 vm02 bash[55303]: cephadm 2026-03-10T05:56:30.306817+0000 mgr.y (mgr.24992) 249 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:31 vm02 bash[55303]: cephadm 2026-03-10T05:56:30.306817+0000 mgr.y (mgr.24992) 249 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:31 vm02 bash[55303]: cephadm 2026-03-10T05:56:30.312851+0000 mgr.y (mgr.24992) 250 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:31 vm02 bash[55303]: cephadm 2026-03-10T05:56:30.312851+0000 mgr.y (mgr.24992) 250 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:31 vm02 bash[55303]: cephadm 2026-03-10T05:56:30.691678+0000 mgr.y (mgr.24992) 251 : cephadm [INF] Upgrade: Updating iscsi.foo.vm02.mxbwmh 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:31 vm02 bash[55303]: cephadm 2026-03-10T05:56:30.691678+0000 mgr.y (mgr.24992) 251 : cephadm [INF] Upgrade: Updating iscsi.foo.vm02.mxbwmh 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:31 vm02 bash[55303]: cephadm 2026-03-10T05:56:30.701586+0000 mgr.y (mgr.24992) 252 : cephadm [INF] Deploying daemon iscsi.foo.vm02.mxbwmh on vm02 2026-03-10T05:56:32.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:31 vm02 bash[55303]: cephadm 2026-03-10T05:56:30.701586+0000 mgr.y (mgr.24992) 252 : cephadm [INF] Deploying daemon iscsi.foo.vm02.mxbwmh on vm02 2026-03-10T05:56:32.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:32 vm05 bash[43541]: cluster 2026-03-10T05:56:30.863978+0000 mgr.y (mgr.24992) 253 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 165 KiB/s rd, 170 B/s wr, 260 op/s 2026-03-10T05:56:32.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:32 vm05 bash[43541]: cluster 2026-03-10T05:56:30.863978+0000 mgr.y (mgr.24992) 253 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 165 KiB/s rd, 170 B/s wr, 260 op/s 2026-03-10T05:56:33.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:32 vm02 bash[56371]: cluster 2026-03-10T05:56:30.863978+0000 mgr.y (mgr.24992) 253 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 165 KiB/s rd, 170 B/s wr, 260 op/s 2026-03-10T05:56:33.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:32 vm02 bash[56371]: cluster 2026-03-10T05:56:30.863978+0000 mgr.y (mgr.24992) 253 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 165 KiB/s rd, 170 B/s wr, 260 op/s 
2026-03-10T05:56:33.085 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:56:32 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:56:32] "GET /metrics HTTP/1.1" 200 38255 "" "Prometheus/2.51.0" 2026-03-10T05:56:33.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:32 vm02 bash[55303]: cluster 2026-03-10T05:56:30.863978+0000 mgr.y (mgr.24992) 253 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 165 KiB/s rd, 170 B/s wr, 260 op/s 2026-03-10T05:56:33.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:32 vm02 bash[55303]: cluster 2026-03-10T05:56:30.863978+0000 mgr.y (mgr.24992) 253 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 165 KiB/s rd, 170 B/s wr, 260 op/s 2026-03-10T05:56:33.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:33 vm05 bash[43541]: cluster 2026-03-10T05:56:32.864371+0000 mgr.y (mgr.24992) 254 : cluster [DBG] pgmap v145: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 93 KiB/s rd, 170 B/s wr, 146 op/s 2026-03-10T05:56:33.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:33 vm05 bash[43541]: cluster 2026-03-10T05:56:32.864371+0000 mgr.y (mgr.24992) 254 : cluster [DBG] pgmap v145: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 93 KiB/s rd, 170 B/s wr, 146 op/s 2026-03-10T05:56:34.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:33 vm02 bash[56371]: cluster 2026-03-10T05:56:32.864371+0000 mgr.y (mgr.24992) 254 : cluster [DBG] pgmap v145: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 93 KiB/s rd, 170 B/s wr, 146 op/s 2026-03-10T05:56:34.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:33 vm02 bash[56371]: cluster 2026-03-10T05:56:32.864371+0000 mgr.y (mgr.24992) 254 : cluster [DBG] pgmap v145: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 93 KiB/s rd, 170 B/s wr, 146 op/s 2026-03-10T05:56:34.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:33 vm02 bash[55303]: cluster 2026-03-10T05:56:32.864371+0000 mgr.y (mgr.24992) 254 : cluster [DBG] pgmap v145: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 93 KiB/s rd, 170 B/s wr, 146 op/s 2026-03-10T05:56:34.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:33 vm02 bash[55303]: cluster 2026-03-10T05:56:32.864371+0000 mgr.y (mgr.24992) 254 : cluster [DBG] pgmap v145: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 93 KiB/s rd, 170 B/s wr, 146 op/s 2026-03-10T05:56:34.499 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:56:34 vm05 bash[41269]: ts=2026-03-10T05:56:34.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.3\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.3\", ceph_version=\"ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.3\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T05:56:36.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:35 vm05 bash[43541]: cluster 2026-03-10T05:56:34.864722+0000 mgr.y (mgr.24992) 255 : cluster [DBG] pgmap v146: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 12 KiB/s rd, 0 B/s wr, 17 op/s 2026-03-10T05:56:36.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:35 vm05 bash[43541]: cluster 2026-03-10T05:56:34.864722+0000 mgr.y (mgr.24992) 255 : cluster [DBG] pgmap v146: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 12 KiB/s rd, 0 B/s wr, 17 op/s 2026-03-10T05:56:36.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:35 vm02 bash[56371]: cluster 2026-03-10T05:56:34.864722+0000 mgr.y (mgr.24992) 255 : cluster [DBG] pgmap v146: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 12 KiB/s rd, 0 B/s wr, 17 op/s 2026-03-10T05:56:36.334 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:35 vm02 bash[56371]: cluster 2026-03-10T05:56:34.864722+0000 mgr.y (mgr.24992) 255 : cluster [DBG] pgmap v146: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 12 KiB/s rd, 0 B/s wr, 17 op/s 2026-03-10T05:56:36.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:35 vm02 bash[55303]: cluster 2026-03-10T05:56:34.864722+0000 mgr.y (mgr.24992) 255 : cluster [DBG] pgmap v146: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 12 KiB/s rd, 0 B/s wr, 17 op/s 2026-03-10T05:56:36.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:35 vm02 bash[55303]: cluster 2026-03-10T05:56:34.864722+0000 mgr.y (mgr.24992) 255 : cluster [DBG] pgmap v146: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 12 KiB/s rd, 0 B/s wr, 17 op/s 2026-03-10T05:56:37.249 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:56:36 vm05 bash[41269]: ts=2026-03-10T05:56:36.949Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: 
Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T05:56:38.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:37 vm05 bash[43541]: cluster 2026-03-10T05:56:36.865161+0000 mgr.y (mgr.24992) 256 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:56:38.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:37 vm05 bash[43541]: cluster 2026-03-10T05:56:36.865161+0000 mgr.y (mgr.24992) 256 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:56:38.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:37 vm05 bash[43541]: audit 2026-03-10T05:56:37.024110+0000 mgr.y (mgr.24992) 257 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:56:38.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:37 vm05 bash[43541]: audit 2026-03-10T05:56:37.024110+0000 mgr.y (mgr.24992) 257 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:56:38.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:37 vm02 bash[56371]: cluster 2026-03-10T05:56:36.865161+0000 mgr.y (mgr.24992) 256 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:56:38.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:37 vm02 bash[56371]: cluster 2026-03-10T05:56:36.865161+0000 mgr.y (mgr.24992) 256 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:56:38.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:37 vm02 bash[56371]: audit 2026-03-10T05:56:37.024110+0000 mgr.y (mgr.24992) 257 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:56:38.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:37 vm02 bash[56371]: audit 2026-03-10T05:56:37.024110+0000 mgr.y (mgr.24992) 257 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:56:38.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:37 vm02 bash[55303]: cluster 2026-03-10T05:56:36.865161+0000 mgr.y (mgr.24992) 256 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:56:38.335 
INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:37 vm02 bash[55303]: cluster 2026-03-10T05:56:36.865161+0000 mgr.y (mgr.24992) 256 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:56:38.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:37 vm02 bash[55303]: audit 2026-03-10T05:56:37.024110+0000 mgr.y (mgr.24992) 257 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:56:38.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:37 vm02 bash[55303]: audit 2026-03-10T05:56:37.024110+0000 mgr.y (mgr.24992) 257 : audit [DBG] from='client.25048 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:56:40.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:39 vm05 bash[43541]: cluster 2026-03-10T05:56:38.865518+0000 mgr.y (mgr.24992) 258 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:56:40.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:39 vm05 bash[43541]: cluster 2026-03-10T05:56:38.865518+0000 mgr.y (mgr.24992) 258 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:56:40.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:39 vm02 bash[56371]: cluster 2026-03-10T05:56:38.865518+0000 mgr.y (mgr.24992) 258 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:56:40.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:39 vm02 bash[56371]: cluster 2026-03-10T05:56:38.865518+0000 mgr.y (mgr.24992) 258 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:56:40.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:39 vm02 bash[55303]: cluster 2026-03-10T05:56:38.865518+0000 mgr.y (mgr.24992) 258 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:56:40.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:39 vm02 bash[55303]: cluster 2026-03-10T05:56:38.865518+0000 mgr.y (mgr.24992) 258 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:56:41.248 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:40 vm05 bash[43541]: audit 2026-03-10T05:56:40.882076+0000 mon.a (mon.0) 602 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:56:41.249 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:40 vm05 bash[43541]: audit 2026-03-10T05:56:40.882076+0000 mon.a (mon.0) 602 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:56:41.293 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:40 vm02 bash[56371]: audit 2026-03-10T05:56:40.882076+0000 mon.a (mon.0) 602 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:56:41.293 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:40 vm02 bash[56371]: audit 
2026-03-10T05:56:40.882076+0000 mon.a (mon.0) 602 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:56:41.293 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:40 vm02 bash[55303]: audit 2026-03-10T05:56:40.882076+0000 mon.a (mon.0) 602 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:56:41.294 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:40 vm02 bash[55303]: audit 2026-03-10T05:56:40.882076+0000 mon.a (mon.0) 602 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:56:41.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:41 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:56:41.586 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:56:41 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:56:41.586 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:56:41 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:56:41.586 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:56:41 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:56:41.586 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:56:41 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:56:41.586 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:56:41 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
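The KillMode=none deprecation warnings repeated here for each cephadm-managed unit on vm02 (the run continues below) come from the systemd unit template installed by the v17.2.0 bootstrap image and are harmless noise in this run. For illustration only, a systemd drop-in of this shape is how one would move a unit off KillMode=none; the path below is just this cluster's fsid-templated unit, and cephadm rewrites its own unit files, so a hand-made override is not the real fix:

    # /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service.d/override.conf
    [Service]
    KillMode=mixed

followed by systemctl daemon-reload. The warnings should stop once the upgrade redeploys the daemons with a unit template that no longer sets KillMode=none.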
2026-03-10T05:56:41.586 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:56:41 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:56:41.586 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:41 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:56:41.586 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:56:41 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:56:42.884 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:42 vm02 bash[56371]: cluster 2026-03-10T05:56:40.865832+0000 mgr.y (mgr.24992) 259 : cluster [DBG] pgmap v149: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:56:42.884 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:42 vm02 bash[56371]: cluster 2026-03-10T05:56:40.865832+0000 mgr.y (mgr.24992) 259 : cluster [DBG] pgmap v149: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:56:42.884 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:42 vm02 bash[56371]: audit 2026-03-10T05:56:41.616968+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:42.884 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:42 vm02 bash[56371]: audit 2026-03-10T05:56:41.616968+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:42.884 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:42 vm02 bash[56371]: audit 2026-03-10T05:56:41.624480+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:42.884 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:42 vm02 bash[56371]: audit 2026-03-10T05:56:41.624480+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:42.884 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:42 vm02 bash[56371]: audit 2026-03-10T05:56:42.177743+0000 mon.a (mon.0) 605 : audit [DBG] from='client.? 192.168.123.102:0/2590221720' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T05:56:42.884 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:42 vm02 bash[56371]: audit 2026-03-10T05:56:42.177743+0000 mon.a (mon.0) 605 : audit [DBG] from='client.? 
192.168.123.102:0/2590221720' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T05:56:42.884 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:42 vm02 bash[56371]: audit 2026-03-10T05:56:42.334593+0000 mon.c (mon.1) 14 : audit [INF] from='client.? 192.168.123.102:0/1725169284' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/323651797"}]: dispatch 2026-03-10T05:56:42.884 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:42 vm02 bash[56371]: audit 2026-03-10T05:56:42.334593+0000 mon.c (mon.1) 14 : audit [INF] from='client.? 192.168.123.102:0/1725169284' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/323651797"}]: dispatch 2026-03-10T05:56:42.885 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:42 vm02 bash[56371]: audit 2026-03-10T05:56:42.334965+0000 mon.a (mon.0) 606 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/323651797"}]: dispatch 2026-03-10T05:56:42.885 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:42 vm02 bash[56371]: audit 2026-03-10T05:56:42.334965+0000 mon.a (mon.0) 606 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/323651797"}]: dispatch 2026-03-10T05:56:42.885 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:42 vm02 bash[55303]: cluster 2026-03-10T05:56:40.865832+0000 mgr.y (mgr.24992) 259 : cluster [DBG] pgmap v149: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:56:42.885 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:42 vm02 bash[55303]: cluster 2026-03-10T05:56:40.865832+0000 mgr.y (mgr.24992) 259 : cluster [DBG] pgmap v149: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:56:42.885 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:42 vm02 bash[55303]: audit 2026-03-10T05:56:41.616968+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:42.885 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:42 vm02 bash[55303]: audit 2026-03-10T05:56:41.616968+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:42.885 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:42 vm02 bash[55303]: audit 2026-03-10T05:56:41.624480+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:42.885 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:42 vm02 bash[55303]: audit 2026-03-10T05:56:41.624480+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:42.885 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:42 vm02 bash[55303]: audit 2026-03-10T05:56:42.177743+0000 mon.a (mon.0) 605 : audit [DBG] from='client.? 192.168.123.102:0/2590221720' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T05:56:42.885 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:42 vm02 bash[55303]: audit 2026-03-10T05:56:42.177743+0000 mon.a (mon.0) 605 : audit [DBG] from='client.? 
192.168.123.102:0/2590221720' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T05:56:42.885 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:42 vm02 bash[55303]: audit 2026-03-10T05:56:42.334593+0000 mon.c (mon.1) 14 : audit [INF] from='client.? 192.168.123.102:0/1725169284' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/323651797"}]: dispatch 2026-03-10T05:56:42.885 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:42 vm02 bash[55303]: audit 2026-03-10T05:56:42.334593+0000 mon.c (mon.1) 14 : audit [INF] from='client.? 192.168.123.102:0/1725169284' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/323651797"}]: dispatch 2026-03-10T05:56:42.885 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:42 vm02 bash[55303]: audit 2026-03-10T05:56:42.334965+0000 mon.a (mon.0) 606 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/323651797"}]: dispatch 2026-03-10T05:56:42.885 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:42 vm02 bash[55303]: audit 2026-03-10T05:56:42.334965+0000 mon.a (mon.0) 606 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/323651797"}]: dispatch 2026-03-10T05:56:42.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:42 vm05 bash[43541]: cluster 2026-03-10T05:56:40.865832+0000 mgr.y (mgr.24992) 259 : cluster [DBG] pgmap v149: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:56:42.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:42 vm05 bash[43541]: cluster 2026-03-10T05:56:40.865832+0000 mgr.y (mgr.24992) 259 : cluster [DBG] pgmap v149: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:56:42.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:42 vm05 bash[43541]: audit 2026-03-10T05:56:41.616968+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:42.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:42 vm05 bash[43541]: audit 2026-03-10T05:56:41.616968+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:42.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:42 vm05 bash[43541]: audit 2026-03-10T05:56:41.624480+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:42.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:42 vm05 bash[43541]: audit 2026-03-10T05:56:41.624480+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:42.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:42 vm05 bash[43541]: audit 2026-03-10T05:56:42.177743+0000 mon.a (mon.0) 605 : audit [DBG] from='client.? 192.168.123.102:0/2590221720' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T05:56:42.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:42 vm05 bash[43541]: audit 2026-03-10T05:56:42.177743+0000 mon.a (mon.0) 605 : audit [DBG] from='client.? 
192.168.123.102:0/2590221720' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T05:56:42.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:42 vm05 bash[43541]: audit 2026-03-10T05:56:42.334593+0000 mon.c (mon.1) 14 : audit [INF] from='client.? 192.168.123.102:0/1725169284' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/323651797"}]: dispatch 2026-03-10T05:56:42.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:42 vm05 bash[43541]: audit 2026-03-10T05:56:42.334593+0000 mon.c (mon.1) 14 : audit [INF] from='client.? 192.168.123.102:0/1725169284' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/323651797"}]: dispatch 2026-03-10T05:56:42.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:42 vm05 bash[43541]: audit 2026-03-10T05:56:42.334965+0000 mon.a (mon.0) 606 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/323651797"}]: dispatch 2026-03-10T05:56:42.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:42 vm05 bash[43541]: audit 2026-03-10T05:56:42.334965+0000 mon.a (mon.0) 606 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/323651797"}]: dispatch 2026-03-10T05:56:43.335 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:56:42 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:56:42] "GET /metrics HTTP/1.1" 200 38257 "" "Prometheus/2.51.0" 2026-03-10T05:56:43.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:43 vm02 bash[56371]: audit 2026-03-10T05:56:42.630336+0000 mon.a (mon.0) 607 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/323651797"}]': finished 2026-03-10T05:56:43.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:43 vm02 bash[56371]: audit 2026-03-10T05:56:42.630336+0000 mon.a (mon.0) 607 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/323651797"}]': finished 2026-03-10T05:56:43.895 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:43 vm02 bash[56371]: cluster 2026-03-10T05:56:42.639343+0000 mon.a (mon.0) 608 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-10T05:56:43.896 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:43 vm02 bash[56371]: cluster 2026-03-10T05:56:42.639343+0000 mon.a (mon.0) 608 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-10T05:56:43.896 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:43 vm02 bash[56371]: audit 2026-03-10T05:56:42.792822+0000 mon.c (mon.1) 15 : audit [INF] from='client.? 192.168.123.102:0/1844467881' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/2805167735"}]: dispatch 2026-03-10T05:56:43.896 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:43 vm02 bash[56371]: audit 2026-03-10T05:56:42.792822+0000 mon.c (mon.1) 15 : audit [INF] from='client.? 
192.168.123.102:0/1844467881' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/2805167735"}]: dispatch 2026-03-10T05:56:43.896 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:43 vm02 bash[56371]: audit 2026-03-10T05:56:42.793170+0000 mon.a (mon.0) 609 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/2805167735"}]: dispatch 2026-03-10T05:56:43.896 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:43 vm02 bash[56371]: audit 2026-03-10T05:56:42.793170+0000 mon.a (mon.0) 609 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/2805167735"}]: dispatch 2026-03-10T05:56:43.896 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:43 vm02 bash[55303]: audit 2026-03-10T05:56:42.630336+0000 mon.a (mon.0) 607 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/323651797"}]': finished 2026-03-10T05:56:43.896 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:43 vm02 bash[55303]: audit 2026-03-10T05:56:42.630336+0000 mon.a (mon.0) 607 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/323651797"}]': finished 2026-03-10T05:56:43.896 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:43 vm02 bash[55303]: cluster 2026-03-10T05:56:42.639343+0000 mon.a (mon.0) 608 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-10T05:56:43.896 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:43 vm02 bash[55303]: cluster 2026-03-10T05:56:42.639343+0000 mon.a (mon.0) 608 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-10T05:56:43.896 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:43 vm02 bash[55303]: audit 2026-03-10T05:56:42.792822+0000 mon.c (mon.1) 15 : audit [INF] from='client.? 192.168.123.102:0/1844467881' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/2805167735"}]: dispatch 2026-03-10T05:56:43.896 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:43 vm02 bash[55303]: audit 2026-03-10T05:56:42.792822+0000 mon.c (mon.1) 15 : audit [INF] from='client.? 192.168.123.102:0/1844467881' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/2805167735"}]: dispatch 2026-03-10T05:56:43.896 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:43 vm02 bash[55303]: audit 2026-03-10T05:56:42.793170+0000 mon.a (mon.0) 609 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/2805167735"}]: dispatch 2026-03-10T05:56:43.896 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:43 vm02 bash[55303]: audit 2026-03-10T05:56:42.793170+0000 mon.a (mon.0) 609 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/2805167735"}]: dispatch 2026-03-10T05:56:43.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:43 vm05 bash[43541]: audit 2026-03-10T05:56:42.630336+0000 mon.a (mon.0) 607 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/323651797"}]': finished 2026-03-10T05:56:43.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:43 vm05 bash[43541]: audit 2026-03-10T05:56:42.630336+0000 mon.a (mon.0) 607 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/323651797"}]': finished 2026-03-10T05:56:43.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:43 vm05 bash[43541]: cluster 2026-03-10T05:56:42.639343+0000 mon.a (mon.0) 608 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-10T05:56:43.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:43 vm05 bash[43541]: cluster 2026-03-10T05:56:42.639343+0000 mon.a (mon.0) 608 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-10T05:56:43.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:43 vm05 bash[43541]: audit 2026-03-10T05:56:42.792822+0000 mon.c (mon.1) 15 : audit [INF] from='client.? 192.168.123.102:0/1844467881' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/2805167735"}]: dispatch 2026-03-10T05:56:43.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:43 vm05 bash[43541]: audit 2026-03-10T05:56:42.792822+0000 mon.c (mon.1) 15 : audit [INF] from='client.? 192.168.123.102:0/1844467881' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/2805167735"}]: dispatch 2026-03-10T05:56:43.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:43 vm05 bash[43541]: audit 2026-03-10T05:56:42.793170+0000 mon.a (mon.0) 609 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/2805167735"}]: dispatch 2026-03-10T05:56:43.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:43 vm05 bash[43541]: audit 2026-03-10T05:56:42.793170+0000 mon.a (mon.0) 609 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/2805167735"}]: dispatch 2026-03-10T05:56:44.499 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:56:44 vm05 bash[41269]: ts=2026-03-10T05:56:44.148Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.3\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.3\", ceph_version=\"ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.3\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.102\", device_class=\"hdd\", hostname=\"vm02\", instance=\"192.168.123.105:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.102\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T05:56:44.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:44 vm05 bash[43541]: cluster 2026-03-10T05:56:42.866100+0000 mgr.y (mgr.24992) 260 : cluster [DBG] pgmap v151: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 921 B/s rd, 0 op/s 2026-03-10T05:56:44.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:44 vm05 bash[43541]: cluster 2026-03-10T05:56:42.866100+0000 mgr.y (mgr.24992) 260 : cluster [DBG] pgmap v151: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 921 B/s rd, 0 op/s 2026-03-10T05:56:44.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:44 vm05 bash[43541]: audit 2026-03-10T05:56:43.638389+0000 mon.a (mon.0) 610 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/2805167735"}]': finished 2026-03-10T05:56:44.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:44 vm05 bash[43541]: audit 2026-03-10T05:56:43.638389+0000 mon.a (mon.0) 610 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/2805167735"}]': finished 2026-03-10T05:56:44.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:44 vm05 bash[43541]: cluster 2026-03-10T05:56:43.646562+0000 mon.a (mon.0) 611 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-10T05:56:44.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:44 vm05 bash[43541]: cluster 2026-03-10T05:56:43.646562+0000 mon.a (mon.0) 611 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-10T05:56:44.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:44 vm05 bash[43541]: audit 2026-03-10T05:56:43.811304+0000 mon.c (mon.1) 16 : audit [INF] from='client.? 192.168.123.102:0/783350755' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/2805167735"}]: dispatch 2026-03-10T05:56:44.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:44 vm05 bash[43541]: audit 2026-03-10T05:56:43.811304+0000 mon.c (mon.1) 16 : audit [INF] from='client.? 
192.168.123.102:0/783350755' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/2805167735"}]: dispatch 2026-03-10T05:56:44.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:44 vm05 bash[43541]: audit 2026-03-10T05:56:43.811559+0000 mon.a (mon.0) 612 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/2805167735"}]: dispatch 2026-03-10T05:56:44.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:44 vm05 bash[43541]: audit 2026-03-10T05:56:43.811559+0000 mon.a (mon.0) 612 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/2805167735"}]: dispatch 2026-03-10T05:56:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:44 vm02 bash[56371]: cluster 2026-03-10T05:56:42.866100+0000 mgr.y (mgr.24992) 260 : cluster [DBG] pgmap v151: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 921 B/s rd, 0 op/s 2026-03-10T05:56:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:44 vm02 bash[56371]: cluster 2026-03-10T05:56:42.866100+0000 mgr.y (mgr.24992) 260 : cluster [DBG] pgmap v151: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 921 B/s rd, 0 op/s 2026-03-10T05:56:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:44 vm02 bash[56371]: audit 2026-03-10T05:56:43.638389+0000 mon.a (mon.0) 610 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/2805167735"}]': finished 2026-03-10T05:56:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:44 vm02 bash[56371]: audit 2026-03-10T05:56:43.638389+0000 mon.a (mon.0) 610 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/2805167735"}]': finished 2026-03-10T05:56:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:44 vm02 bash[56371]: cluster 2026-03-10T05:56:43.646562+0000 mon.a (mon.0) 611 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-10T05:56:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:44 vm02 bash[56371]: cluster 2026-03-10T05:56:43.646562+0000 mon.a (mon.0) 611 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-10T05:56:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:44 vm02 bash[56371]: audit 2026-03-10T05:56:43.811304+0000 mon.c (mon.1) 16 : audit [INF] from='client.? 192.168.123.102:0/783350755' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/2805167735"}]: dispatch 2026-03-10T05:56:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:44 vm02 bash[56371]: audit 2026-03-10T05:56:43.811304+0000 mon.c (mon.1) 16 : audit [INF] from='client.? 192.168.123.102:0/783350755' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/2805167735"}]: dispatch 2026-03-10T05:56:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:44 vm02 bash[56371]: audit 2026-03-10T05:56:43.811559+0000 mon.a (mon.0) 612 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/2805167735"}]: dispatch 2026-03-10T05:56:45.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:44 vm02 bash[56371]: audit 2026-03-10T05:56:43.811559+0000 mon.a (mon.0) 612 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/2805167735"}]: dispatch 2026-03-10T05:56:45.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:44 vm02 bash[55303]: cluster 2026-03-10T05:56:42.866100+0000 mgr.y (mgr.24992) 260 : cluster [DBG] pgmap v151: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 921 B/s rd, 0 op/s 2026-03-10T05:56:45.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:44 vm02 bash[55303]: cluster 2026-03-10T05:56:42.866100+0000 mgr.y (mgr.24992) 260 : cluster [DBG] pgmap v151: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 921 B/s rd, 0 op/s 2026-03-10T05:56:45.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:44 vm02 bash[55303]: audit 2026-03-10T05:56:43.638389+0000 mon.a (mon.0) 610 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/2805167735"}]': finished 2026-03-10T05:56:45.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:44 vm02 bash[55303]: audit 2026-03-10T05:56:43.638389+0000 mon.a (mon.0) 610 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6801/2805167735"}]': finished 2026-03-10T05:56:45.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:44 vm02 bash[55303]: cluster 2026-03-10T05:56:43.646562+0000 mon.a (mon.0) 611 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-10T05:56:45.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:44 vm02 bash[55303]: cluster 2026-03-10T05:56:43.646562+0000 mon.a (mon.0) 611 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-10T05:56:45.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:44 vm02 bash[55303]: audit 2026-03-10T05:56:43.811304+0000 mon.c (mon.1) 16 : audit [INF] from='client.? 192.168.123.102:0/783350755' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/2805167735"}]: dispatch 2026-03-10T05:56:45.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:44 vm02 bash[55303]: audit 2026-03-10T05:56:43.811304+0000 mon.c (mon.1) 16 : audit [INF] from='client.? 192.168.123.102:0/783350755' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/2805167735"}]: dispatch 2026-03-10T05:56:45.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:44 vm02 bash[55303]: audit 2026-03-10T05:56:43.811559+0000 mon.a (mon.0) 612 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/2805167735"}]: dispatch 2026-03-10T05:56:45.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:44 vm02 bash[55303]: audit 2026-03-10T05:56:43.811559+0000 mon.a (mon.0) 612 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/2805167735"}]: dispatch 2026-03-10T05:56:45.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:45 vm05 bash[43541]: audit 2026-03-10T05:56:44.649868+0000 mon.a (mon.0) 613 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/2805167735"}]': finished 2026-03-10T05:56:45.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:45 vm05 bash[43541]: audit 2026-03-10T05:56:44.649868+0000 mon.a (mon.0) 613 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/2805167735"}]': finished 2026-03-10T05:56:45.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:45 vm05 bash[43541]: cluster 2026-03-10T05:56:44.655668+0000 mon.a (mon.0) 614 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-10T05:56:45.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:45 vm05 bash[43541]: cluster 2026-03-10T05:56:44.655668+0000 mon.a (mon.0) 614 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-10T05:56:45.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:45 vm05 bash[43541]: audit 2026-03-10T05:56:44.805722+0000 mon.a (mon.0) 615 : audit [INF] from='client.? 192.168.123.102:0/4237401813' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/330342084"}]: dispatch 2026-03-10T05:56:45.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:45 vm05 bash[43541]: audit 2026-03-10T05:56:44.805722+0000 mon.a (mon.0) 615 : audit [INF] from='client.? 192.168.123.102:0/4237401813' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/330342084"}]: dispatch 2026-03-10T05:56:46.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:45 vm02 bash[56371]: audit 2026-03-10T05:56:44.649868+0000 mon.a (mon.0) 613 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/2805167735"}]': finished 2026-03-10T05:56:46.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:45 vm02 bash[56371]: audit 2026-03-10T05:56:44.649868+0000 mon.a (mon.0) 613 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/2805167735"}]': finished 2026-03-10T05:56:46.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:45 vm02 bash[56371]: cluster 2026-03-10T05:56:44.655668+0000 mon.a (mon.0) 614 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-10T05:56:46.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:45 vm02 bash[56371]: cluster 2026-03-10T05:56:44.655668+0000 mon.a (mon.0) 614 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-10T05:56:46.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:45 vm02 bash[56371]: audit 2026-03-10T05:56:44.805722+0000 mon.a (mon.0) 615 : audit [INF] from='client.? 192.168.123.102:0/4237401813' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/330342084"}]: dispatch 2026-03-10T05:56:46.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:45 vm02 bash[56371]: audit 2026-03-10T05:56:44.805722+0000 mon.a (mon.0) 615 : audit [INF] from='client.? 
192.168.123.102:0/4237401813' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/330342084"}]: dispatch 2026-03-10T05:56:46.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:45 vm02 bash[55303]: audit 2026-03-10T05:56:44.649868+0000 mon.a (mon.0) 613 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/2805167735"}]': finished 2026-03-10T05:56:46.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:45 vm02 bash[55303]: audit 2026-03-10T05:56:44.649868+0000 mon.a (mon.0) 613 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:6800/2805167735"}]': finished 2026-03-10T05:56:46.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:45 vm02 bash[55303]: cluster 2026-03-10T05:56:44.655668+0000 mon.a (mon.0) 614 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-10T05:56:46.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:45 vm02 bash[55303]: cluster 2026-03-10T05:56:44.655668+0000 mon.a (mon.0) 614 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-10T05:56:46.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:45 vm02 bash[55303]: audit 2026-03-10T05:56:44.805722+0000 mon.a (mon.0) 615 : audit [INF] from='client.? 192.168.123.102:0/4237401813' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/330342084"}]: dispatch 2026-03-10T05:56:46.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:45 vm02 bash[55303]: audit 2026-03-10T05:56:44.805722+0000 mon.a (mon.0) 615 : audit [INF] from='client.? 192.168.123.102:0/4237401813' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/330342084"}]: dispatch 2026-03-10T05:56:46.944 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:46 vm05 bash[43541]: cluster 2026-03-10T05:56:44.866407+0000 mgr.y (mgr.24992) 261 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s 2026-03-10T05:56:46.944 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:46 vm05 bash[43541]: cluster 2026-03-10T05:56:44.866407+0000 mgr.y (mgr.24992) 261 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s 2026-03-10T05:56:46.944 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:46 vm05 bash[43541]: audit 2026-03-10T05:56:45.658446+0000 mon.a (mon.0) 616 : audit [INF] from='client.? 192.168.123.102:0/4237401813' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/330342084"}]': finished 2026-03-10T05:56:46.944 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:46 vm05 bash[43541]: audit 2026-03-10T05:56:45.658446+0000 mon.a (mon.0) 616 : audit [INF] from='client.? 
192.168.123.102:0/4237401813' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/330342084"}]': finished 2026-03-10T05:56:46.944 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:46 vm05 bash[43541]: cluster 2026-03-10T05:56:45.669707+0000 mon.a (mon.0) 617 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-10T05:56:46.944 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:46 vm05 bash[43541]: cluster 2026-03-10T05:56:45.669707+0000 mon.a (mon.0) 617 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-10T05:56:46.944 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:46 vm05 bash[43541]: audit 2026-03-10T05:56:45.816826+0000 mon.c (mon.1) 17 : audit [INF] from='client.? 192.168.123.102:0/1141297978' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1008797775"}]: dispatch 2026-03-10T05:56:46.944 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:46 vm05 bash[43541]: audit 2026-03-10T05:56:45.816826+0000 mon.c (mon.1) 17 : audit [INF] from='client.? 192.168.123.102:0/1141297978' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1008797775"}]: dispatch 2026-03-10T05:56:46.944 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:46 vm05 bash[43541]: audit 2026-03-10T05:56:45.817514+0000 mon.a (mon.0) 618 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1008797775"}]: dispatch 2026-03-10T05:56:46.944 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:46 vm05 bash[43541]: audit 2026-03-10T05:56:45.817514+0000 mon.a (mon.0) 618 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1008797775"}]: dispatch 2026-03-10T05:56:47.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:46 vm02 bash[56371]: cluster 2026-03-10T05:56:44.866407+0000 mgr.y (mgr.24992) 261 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s 2026-03-10T05:56:47.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:46 vm02 bash[56371]: cluster 2026-03-10T05:56:44.866407+0000 mgr.y (mgr.24992) 261 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s 2026-03-10T05:56:47.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:46 vm02 bash[56371]: audit 2026-03-10T05:56:45.658446+0000 mon.a (mon.0) 616 : audit [INF] from='client.? 192.168.123.102:0/4237401813' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/330342084"}]': finished 2026-03-10T05:56:47.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:46 vm02 bash[56371]: audit 2026-03-10T05:56:45.658446+0000 mon.a (mon.0) 616 : audit [INF] from='client.? 
192.168.123.102:0/4237401813' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/330342084"}]': finished 2026-03-10T05:56:47.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:46 vm02 bash[56371]: cluster 2026-03-10T05:56:45.669707+0000 mon.a (mon.0) 617 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-10T05:56:47.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:46 vm02 bash[56371]: cluster 2026-03-10T05:56:45.669707+0000 mon.a (mon.0) 617 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-10T05:56:47.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:46 vm02 bash[56371]: audit 2026-03-10T05:56:45.816826+0000 mon.c (mon.1) 17 : audit [INF] from='client.? 192.168.123.102:0/1141297978' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1008797775"}]: dispatch 2026-03-10T05:56:47.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:46 vm02 bash[56371]: audit 2026-03-10T05:56:45.816826+0000 mon.c (mon.1) 17 : audit [INF] from='client.? 192.168.123.102:0/1141297978' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1008797775"}]: dispatch 2026-03-10T05:56:47.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:46 vm02 bash[56371]: audit 2026-03-10T05:56:45.817514+0000 mon.a (mon.0) 618 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1008797775"}]: dispatch 2026-03-10T05:56:47.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:46 vm02 bash[56371]: audit 2026-03-10T05:56:45.817514+0000 mon.a (mon.0) 618 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1008797775"}]: dispatch 2026-03-10T05:56:47.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:46 vm02 bash[55303]: cluster 2026-03-10T05:56:44.866407+0000 mgr.y (mgr.24992) 261 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s 2026-03-10T05:56:47.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:46 vm02 bash[55303]: cluster 2026-03-10T05:56:44.866407+0000 mgr.y (mgr.24992) 261 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s 2026-03-10T05:56:47.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:46 vm02 bash[55303]: audit 2026-03-10T05:56:45.658446+0000 mon.a (mon.0) 616 : audit [INF] from='client.? 192.168.123.102:0/4237401813' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/330342084"}]': finished 2026-03-10T05:56:47.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:46 vm02 bash[55303]: audit 2026-03-10T05:56:45.658446+0000 mon.a (mon.0) 616 : audit [INF] from='client.? 
192.168.123.102:0/4237401813' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/330342084"}]': finished 2026-03-10T05:56:47.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:46 vm02 bash[55303]: cluster 2026-03-10T05:56:45.669707+0000 mon.a (mon.0) 617 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-10T05:56:47.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:46 vm02 bash[55303]: cluster 2026-03-10T05:56:45.669707+0000 mon.a (mon.0) 617 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-10T05:56:47.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:46 vm02 bash[55303]: audit 2026-03-10T05:56:45.816826+0000 mon.c (mon.1) 17 : audit [INF] from='client.? 192.168.123.102:0/1141297978' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1008797775"}]: dispatch 2026-03-10T05:56:47.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:46 vm02 bash[55303]: audit 2026-03-10T05:56:45.816826+0000 mon.c (mon.1) 17 : audit [INF] from='client.? 192.168.123.102:0/1141297978' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1008797775"}]: dispatch 2026-03-10T05:56:47.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:46 vm02 bash[55303]: audit 2026-03-10T05:56:45.817514+0000 mon.a (mon.0) 618 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1008797775"}]: dispatch 2026-03-10T05:56:47.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:46 vm02 bash[55303]: audit 2026-03-10T05:56:45.817514+0000 mon.a (mon.0) 618 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1008797775"}]: dispatch 2026-03-10T05:56:47.249 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:56:46 vm05 bash[41269]: ts=2026-03-10T05:56:46.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T05:56:47.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:47 vm05 bash[43541]: audit 2026-03-10T05:56:46.668767+0000 mon.a (mon.0) 619 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1008797775"}]': finished 2026-03-10T05:56:47.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:47 vm05 bash[43541]: audit 2026-03-10T05:56:46.668767+0000 mon.a (mon.0) 619 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1008797775"}]': finished 2026-03-10T05:56:47.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:47 vm05 bash[43541]: cluster 2026-03-10T05:56:46.675323+0000 mon.a (mon.0) 620 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in 2026-03-10T05:56:47.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:47 vm05 bash[43541]: cluster 2026-03-10T05:56:46.675323+0000 mon.a (mon.0) 620 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in 2026-03-10T05:56:47.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:47 vm05 bash[43541]: audit 2026-03-10T05:56:46.787224+0000 mon.a (mon.0) 621 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:47.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:47 vm05 bash[43541]: audit 2026-03-10T05:56:46.787224+0000 mon.a (mon.0) 621 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:47.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:47 vm05 bash[43541]: audit 2026-03-10T05:56:46.796126+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:47.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:47 vm05 bash[43541]: audit 2026-03-10T05:56:46.796126+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:47.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:47 vm05 bash[43541]: audit 2026-03-10T05:56:46.908965+0000 mon.c (mon.1) 18 : audit [INF] from='client.? 192.168.123.102:0/296329131' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1511793199"}]: dispatch 2026-03-10T05:56:47.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:47 vm05 bash[43541]: audit 2026-03-10T05:56:46.908965+0000 mon.c (mon.1) 18 : audit [INF] from='client.? 192.168.123.102:0/296329131' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1511793199"}]: dispatch 2026-03-10T05:56:47.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:47 vm05 bash[43541]: audit 2026-03-10T05:56:46.909309+0000 mon.a (mon.0) 623 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1511793199"}]: dispatch 2026-03-10T05:56:47.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:47 vm05 bash[43541]: audit 2026-03-10T05:56:46.909309+0000 mon.a (mon.0) 623 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1511793199"}]: dispatch
2026-03-10T05:56:47.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:47 vm05 bash[43541]: audit 2026-03-10T05:56:46.935656+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:47.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:47 vm05 bash[43541]: audit 2026-03-10T05:56:46.941760+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:47.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:47 vm05 bash[43541]: audit 2026-03-10T05:56:47.461314+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:47.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:47 vm05 bash[43541]: audit 2026-03-10T05:56:47.467005+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:48.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:47 vm02 bash[56371]: audit 2026-03-10T05:56:46.668767+0000 mon.a (mon.0) 619 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1008797775"}]': finished
2026-03-10T05:56:48.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:47 vm02 bash[56371]: cluster 2026-03-10T05:56:46.675323+0000 mon.a (mon.0) 620 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in
2026-03-10T05:56:48.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:47 vm02 bash[56371]: audit 2026-03-10T05:56:46.787224+0000 mon.a (mon.0) 621 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:48.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:47 vm02 bash[56371]: audit 2026-03-10T05:56:46.796126+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:48.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:47 vm02 bash[56371]: audit 2026-03-10T05:56:46.908965+0000 mon.c (mon.1) 18 : audit [INF] from='client.? 192.168.123.102:0/296329131' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1511793199"}]: dispatch
2026-03-10T05:56:48.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:47 vm02 bash[56371]: audit 2026-03-10T05:56:46.909309+0000 mon.a (mon.0) 623 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1511793199"}]: dispatch
2026-03-10T05:56:48.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:47 vm02 bash[56371]: audit 2026-03-10T05:56:46.935656+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:48.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:47 vm02 bash[56371]: audit 2026-03-10T05:56:46.941760+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:48.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:47 vm02 bash[56371]: audit 2026-03-10T05:56:47.461314+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:48.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:47 vm02 bash[56371]: audit 2026-03-10T05:56:47.467005+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:48.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:47 vm02 bash[55303]: audit 2026-03-10T05:56:46.668767+0000 mon.a (mon.0) 619 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1008797775"}]': finished
2026-03-10T05:56:48.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:47 vm02 bash[55303]: cluster 2026-03-10T05:56:46.675323+0000 mon.a (mon.0) 620 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in
2026-03-10T05:56:48.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:47 vm02 bash[55303]: audit 2026-03-10T05:56:46.787224+0000 mon.a (mon.0) 621 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:48.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:47 vm02 bash[55303]: audit 2026-03-10T05:56:46.796126+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:48.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:47 vm02 bash[55303]: audit 2026-03-10T05:56:46.908965+0000 mon.c (mon.1) 18 : audit [INF] from='client.? 192.168.123.102:0/296329131' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1511793199"}]: dispatch
2026-03-10T05:56:48.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:47 vm02 bash[55303]: audit 2026-03-10T05:56:46.909309+0000 mon.a (mon.0) 623 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1511793199"}]: dispatch
2026-03-10T05:56:48.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:47 vm02 bash[55303]: audit 2026-03-10T05:56:46.935656+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:48.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:47 vm02 bash[55303]: audit 2026-03-10T05:56:46.941760+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:48.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:47 vm02 bash[55303]: audit 2026-03-10T05:56:47.461314+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:48.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:47 vm02 bash[55303]: audit 2026-03-10T05:56:47.467005+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:48.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:48 vm05 bash[43541]: cluster 2026-03-10T05:56:46.866694+0000 mgr.y (mgr.24992) 262 : cluster [DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:56:48.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:48 vm05 bash[43541]: audit 2026-03-10T05:56:47.801472+0000 mon.a (mon.0) 628 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1511793199"}]': finished
2026-03-10T05:56:48.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:48 vm05 bash[43541]: cluster 2026-03-10T05:56:47.810112+0000 mon.a (mon.0) 629 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in
2026-03-10T05:56:49.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:48 vm02 bash[56371]: cluster 2026-03-10T05:56:46.866694+0000 mgr.y (mgr.24992) 262 : cluster [DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:56:49.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:48 vm02 bash[56371]: audit 2026-03-10T05:56:47.801472+0000 mon.a (mon.0) 628 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1511793199"}]': finished
2026-03-10T05:56:49.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:48 vm02 bash[56371]: cluster 2026-03-10T05:56:47.810112+0000 mon.a (mon.0) 629 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in
2026-03-10T05:56:49.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:48 vm02 bash[55303]: cluster 2026-03-10T05:56:46.866694+0000 mgr.y (mgr.24992) 262 : cluster [DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:56:49.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:48 vm02 bash[55303]: audit 2026-03-10T05:56:47.801472+0000 mon.a (mon.0) 628 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm02.mxbwmh' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.102:0/1511793199"}]': finished
2026-03-10T05:56:49.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:48 vm02 bash[55303]: cluster 2026-03-10T05:56:47.810112+0000 mon.a (mon.0) 629 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in
2026-03-10T05:56:50.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:50 vm05 bash[43541]: cluster 2026-03-10T05:56:48.867010+0000 mgr.y (mgr.24992) 263 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:56:51.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:50 vm02 bash[56371]: cluster 2026-03-10T05:56:48.867010+0000 mgr.y (mgr.24992) 263 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:56:51.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:50 vm02 bash[55303]: cluster 2026-03-10T05:56:48.867010+0000 mgr.y (mgr.24992) 263 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:56:52.956 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:52 vm02 bash[56371]: cluster 2026-03-10T05:56:50.867458+0000 mgr.y (mgr.24992) 264 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:56:52.956 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:56:52 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:56:52] "GET /metrics HTTP/1.1" 200 38257 "" "Prometheus/2.51.0"
2026-03-10T05:56:52.956 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:52 vm02 bash[55303]: cluster 2026-03-10T05:56:50.867458+0000 mgr.y (mgr.24992) 264 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:56:52.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:52 vm05 bash[43541]: cluster 2026-03-10T05:56:50.867458+0000 mgr.y (mgr.24992) 264 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:52.038263+0000 mgr.y (mgr.24992) 265 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.000811+0000 mon.a (mon.0) 630 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.007110+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.009213+0000 mon.a (mon.0) 632 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.009730+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.155965+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.172571+0000 mon.a (mon.0) 635 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.180760+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.183604+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.184476+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.188143+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.217394+0000 mon.a (mon.0) 640 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.218405+0000 mon.a (mon.0) 641 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.219031+0000 mon.a (mon.0) 642 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.219501+0000 mon.a (mon.0) 643 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.220198+0000 mon.a (mon.0) 644 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.221134+0000 mon.a (mon.0) 645 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.221733+0000 mon.a (mon.0) 646 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.222232+0000 mon.a (mon.0) 647 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.222700+0000 mon.a (mon.0) 648 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.223191+0000 mon.a (mon.0) 649 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.223673+0000 mon.a (mon.0) 650 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.227083+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.229551+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm02.mxbwmh"}]: dispatch
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.231932+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm02.mxbwmh"}]': finished
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.234636+0000 mon.a (mon.0) 654 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.237364+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:53.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.239578+0000 mon.a (mon.0) 656 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.242651+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:54.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.243969+0000 mon.a (mon.0) 658 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.245479+0000 mon.a (mon.0) 659 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.000 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:53 vm05 bash[43541]: audit 2026-03-10T05:56:53.246132+0000 mon.a (mon.0) 660 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:52.038263+0000 mgr.y (mgr.24992) 265 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:56:54.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.000811+0000 mon.a (mon.0) 630 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:54.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.007110+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:54.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.009213+0000 mon.a (mon.0) 632 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:56:54.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.009730+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:56:54.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.155965+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:54.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.172571+0000 mon.a (mon.0) 635 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-10T05:56:54.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.180760+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:54.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.183604+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-10T05:56:54.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.184476+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.188143+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.217394+0000 mon.a (mon.0) 640 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.218405+0000 mon.a (mon.0) 641 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.219031+0000 mon.a (mon.0) 642 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.219501+0000 mon.a (mon.0) 643 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.220198+0000 mon.a (mon.0) 644 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.221134+0000 mon.a (mon.0) 645 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.221733+0000 mon.a (mon.0) 646 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.222232+0000 mon.a (mon.0) 647 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.222700+0000 mon.a (mon.0) 648 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.223191+0000 mon.a (mon.0) 649 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.223673+0000 mon.a (mon.0) 650 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.227083+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.229551+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm02.mxbwmh"}]: dispatch
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.231932+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm02.mxbwmh"}]': finished
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.234636+0000 mon.a (mon.0) 654 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.237364+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.239578+0000 mon.a (mon.0) 656 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.242651+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.243969+0000 mon.a (mon.0) 658 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.245479+0000 mon.a (mon.0) 659 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:53 vm02 bash[56371]: audit 2026-03-10T05:56:53.246132+0000 mon.a (mon.0) 660 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:52.038263+0000 mgr.y (mgr.24992) 265 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.000811+0000 mon.a (mon.0) 630 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.007110+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.009213+0000 mon.a (mon.0) 632 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.009730+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:56:54.086 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.155965+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.172571+0000 mon.a (mon.0) 635 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.180760+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.183604+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.184476+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch
2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.188143+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.217394+0000 mon.a (mon.0) 640 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.218405+0000 mon.a (mon.0) 641 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.219031+0000 mon.a (mon.0) 642 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.219501+0000 mon.a (mon.0) 643 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.220198+0000 mon.a (mon.0) 644 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.221134+0000 mon.a (mon.0) 645 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.221733+0000 mon.a (mon.0) 646 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.222232+0000 mon.a (mon.0) 647 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.222700+0000 mon.a (mon.0) 648 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.223191+0000 mon.a (mon.0) 649 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.223673+0000 mon.a (mon.0) 650 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.227083+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.229551+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm02.mxbwmh"}]: dispatch
2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.231932+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm02.mxbwmh"}]': finished
2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.234636+0000 mon.a (mon.0) 654 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.237364+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.239578+0000 mon.a (mon.0) 656 : audit [DBG] from='mgr.24992 
192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.239578+0000 mon.a (mon.0) 656 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.242651+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.242651+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.243969+0000 mon.a (mon.0) 658 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.243969+0000 mon.a (mon.0) 658 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.245479+0000 mon.a (mon.0) 659 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.245479+0000 mon.a (mon.0) 659 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.246132+0000 mon.a (mon.0) 660 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:54.087 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:56:53 vm02 bash[55303]: audit 2026-03-10T05:56:53.246132+0000 mon.a (mon.0) 660 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:56:54.498 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:56:54 vm05 bash[41269]: ts=2026-03-10T05:56:54.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
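The rule failure above is a Prometheus many-to-many join error: while the upgrade is mid-flight, ceph_osd_metadata for osd.3 is scraped both from the old 17.2.0 mgr exporter (instance="192.168.123.105:9283") and from the new ceph_cluster target, so the on (ceph_daemon) group_left join sees two right-hand series. A minimal sketch of how to confirm the duplication and what a deduplicated expression could look like, assuming the Prometheus HTTP API is reachable on the prometheus.a endpoint shown later in the log (vm05:9095) -- illustrative only, not the shipped alert:

  # Any output here means group_left joins on ceph_daemon will fail,
  # which is exactly the "found duplicate series" error above:
  curl -sG 'http://vm05:9095/api/v1/query' \
    --data-urlencode 'query=count by (ceph_daemon) (ceph_osd_metadata) > 1'

  # Deduplicated sketch of the CephOSDFlapping expression: max by (...)
  # collapses the metadata to one series per daemon/host, restoring 1:1 matching:
  curl -sG 'http://vm05:9095/api/v1/query' \
    --data-urlencode 'query=(rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) max by (ceph_daemon, hostname) (ceph_osd_metadata)) * 60 > 1'

The duplicate series disappear on their own once the old mgr exporter target is removed after the upgrade; the query above is mainly useful for confirming that this alert noise is upgrade-transient.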
2026-03-10T05:56:54.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:54 vm05 bash[43541]: cluster 2026-03-10T05:56:52.867818+0000 mgr.y (mgr.24992) 266 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s
2026-03-10T05:56:54.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:54 vm05 bash[43541]: cephadm 2026-03-10T05:56:53.012134+0000 mgr.y (mgr.24992) 267 : cephadm [INF] Checking dashboard <-> RGW credentials
2026-03-10T05:56:54.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:54 vm05 bash[43541]: audit 2026-03-10T05:56:53.173082+0000 mgr.y (mgr.24992) 268 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-10T05:56:54.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:54 vm05 bash[43541]: cephadm 2026-03-10T05:56:53.183496+0000 mgr.y (mgr.24992) 269 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.102:5000 to Dashboard
2026-03-10T05:56:54.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:54 vm05 bash[43541]: audit 2026-03-10T05:56:53.183824+0000 mgr.y (mgr.24992) 270 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-10T05:56:54.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:54 vm05 bash[43541]: audit 2026-03-10T05:56:53.184697+0000 mgr.y (mgr.24992) 271 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm02"}]: dispatch
2026-03-10T05:56:54.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:54 vm05 bash[43541]: cephadm 2026-03-10T05:56:53.224065+0000 mgr.y (mgr.24992) 272 : cephadm [INF] Upgrade: Setting container_image for all iscsi
2026-03-10T05:56:54.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:54 vm05 bash[43541]: cephadm 2026-03-10T05:56:53.234956+0000 mgr.y (mgr.24992) 273 : cephadm [INF] Upgrade: Setting container_image for all nfs
2026-03-10T05:56:54.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:54 vm05 bash[43541]: cephadm 2026-03-10T05:56:53.239923+0000 mgr.y (mgr.24992) 274 : cephadm [INF] Upgrade: Setting container_image for all nvmeof
2026-03-10T05:56:54.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:54 vm05 bash[43541]: cephadm 2026-03-10T05:56:53.652884+0000 mgr.y (mgr.24992) 275 : cephadm [INF] Upgrade: Updating grafana.a
2026-03-10T05:56:54.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:54 vm05 bash[43541]: cephadm 2026-03-10T05:56:53.685391+0000 mgr.y (mgr.24992) 276 : cephadm [INF] Deploying daemon grafana.a on vm05
2026-03-10T05:56:56.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:56 vm05 bash[43541]: cluster 2026-03-10T05:56:54.868249+0000 mgr.y (mgr.24992) 277 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.6 KiB/s rd, 1 op/s
2026-03-10T05:56:56.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:56:56 vm05 bash[43541]: audit 2026-03-10T05:56:55.882692+0000 mon.a (mon.0) 661 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:56:56.999 INFO:journalctl@ceph.prometheus.a.vm05.stdout:Mar 10 05:56:56 vm05 bash[41269]: ts=2026-03-10T05:56:56.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm05\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"107483ae-1c44-11f1-b530-c1172cd6122a\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm05\", job=\"node\", machine=\"x86_64\", nodename=\"vm05\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
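CephNodeDiskspaceWarning fails the same way on the node side: node_uname_info for instance="vm05" exists once with the new cluster label and once without it, so the on (instance) group_left (nodename) join is ambiguous. The analogous check and a deduplicated sketch of the rule expression, under the same assumption about the Prometheus endpoint:

  # Instances with more than one node_uname_info series (endpoint assumed):
  curl -sG 'http://vm05:9095/api/v1/query' \
    --data-urlencode 'query=count by (instance) (node_uname_info) > 1'

  # Deduplicated sketch of the disk-space prediction rule; max by (...)
  # keeps the nodename label the join needs while collapsing duplicates:
  curl -sG 'http://vm05:9095/api/v1/query' \
    --data-urlencode 'query=predict_linear(node_filesystem_free_bytes{device=~"/.*"}[2d], 3600 * 24 * 5) * on (instance) group_left (nodename) max by (instance, nodename) (node_uname_info) < 0'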
2026-03-10T05:56:57.955 INFO:teuthology.orchestra.run.vm02.stdout:true
2026-03-10T05:56:58.354 INFO:teuthology.orchestra.run.vm02.stdout:NAME                   HOST  PORTS             STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION               IMAGE ID      CONTAINER ID
2026-03-10T05:56:58.354 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.a         vm02  *:9093,9094       running (5m)   11s ago    9m   13.2M    -        0.25.0                c8568f914cd2  7a7c5c2cddb6
2026-03-10T05:56:58.355 INFO:teuthology.orchestra.run.vm02.stdout:grafana.a              vm05  *:3000            running (4m)   11s ago    9m   40.3M    -                              dad864ee21e9  95c6d977988a
2026-03-10T05:56:58.355 INFO:teuthology.orchestra.run.vm02.stdout:iscsi.foo.vm02.mxbwmh  vm02                    running (16s)  11s ago    9m   75.5M    -        3.9                   654f31e6858e  f1b577537dcd
2026-03-10T05:56:58.355 INFO:teuthology.orchestra.run.vm02.stdout:mgr.x                  vm05  *:8443,9283,8765  running (4m)   11s ago    12m  465M     -        19.2.3-678-ge911bdeb  654f31e6858e  7579626ada90
2026-03-10T05:56:58.355 INFO:teuthology.orchestra.run.vm02.stdout:mgr.y                  vm02  *:8443,9283,8765  running (4m)   11s ago    13m  537M     -        19.2.3-678-ge911bdeb  654f31e6858e  ef46d0f7b15e
2026-03-10T05:56:58.355 INFO:teuthology.orchestra.run.vm02.stdout:mon.a                  vm02                    running (3m)   11s ago    13m  55.0M    2048M    19.2.3-678-ge911bdeb  654f31e6858e  df3a0a290a95
2026-03-10T05:56:58.355 INFO:teuthology.orchestra.run.vm02.stdout:mon.b                  vm05                    running (3m)   11s ago    12m  46.0M    2048M    19.2.3-678-ge911bdeb  654f31e6858e  1da04b90d16b
2026-03-10T05:56:58.355 INFO:teuthology.orchestra.run.vm02.stdout:mon.c                  vm02                    running (4m)   11s ago    12m  52.4M    2048M    19.2.3-678-ge911bdeb  654f31e6858e  7f2cdf1b7aa6
2026-03-10T05:56:58.355 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.a        vm02  *:9100            running (4m)   11s ago    10m  7539k    -        1.7.0                 72c9c2088986  90288450bd1f
2026-03-10T05:56:58.355 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.b        vm05  *:9100            running (4m)   11s ago    10m  7579k    -        1.7.0                 72c9c2088986  4e859143cb0e
2026-03-10T05:56:58.355 INFO:teuthology.orchestra.run.vm02.stdout:osd.0                  vm02                    running (2m)   11s ago    12m  75.4M    4096M    19.2.3-678-ge911bdeb  654f31e6858e  640360275f83
2026-03-10T05:56:58.355 INFO:teuthology.orchestra.run.vm02.stdout:osd.1                  vm02                    running (2m)   11s ago    11m  56.6M    4096M    19.2.3-678-ge911bdeb  654f31e6858e  4de5c460789a
2026-03-10T05:56:58.355 INFO:teuthology.orchestra.run.vm02.stdout:osd.2                  vm02                    running (2m)   11s ago    11m  51.8M    4096M    19.2.3-678-ge911bdeb  654f31e6858e  51dac2f581d9
2026-03-10T05:56:58.355 INFO:teuthology.orchestra.run.vm02.stdout:osd.3                  vm02                    running (3m)   11s ago    11m  81.2M    4096M    19.2.3-678-ge911bdeb  654f31e6858e  0eca961791f4
2026-03-10T05:56:58.355 INFO:teuthology.orchestra.run.vm02.stdout:osd.4                  vm05                    running (108s) 11s ago    11m  57.8M    4096M    19.2.3-678-ge911bdeb  654f31e6858e  2c1b499265f4
2026-03-10T05:56:58.355 INFO:teuthology.orchestra.run.vm02.stdout:osd.5                  vm05                    running (92s)  11s ago    10m  75.5M    4096M    19.2.3-678-ge911bdeb  654f31e6858e  7ec1a1246098
2026-03-10T05:56:58.355 INFO:teuthology.orchestra.run.vm02.stdout:osd.6                  vm05                    running (75s)  11s ago    10m  73.3M    4096M    19.2.3-678-ge911bdeb  654f31e6858e  bd151ab03026
2026-03-10T05:56:58.355 INFO:teuthology.orchestra.run.vm02.stdout:osd.7                  vm05                    running (59s)  11s ago    10m  72.7M    4096M    19.2.3-678-ge911bdeb  654f31e6858e  83fe4a7f26f5
2026-03-10T05:56:58.355 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.a           vm05  *:9095            running (4m)   11s ago    9m   39.4M    -        2.51.0                1d3b7f56885b  3328811f8f28
2026-03-10T05:56:58.355 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm02.pbogjd    vm02  *:8000            running (45s)  11s ago    9m   92.4M    -        19.2.3-678-ge911bdeb  654f31e6858e  4e1a47dc4ede
2026-03-10T05:56:58.355 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm05.hvmsxl    vm05  *:8000            running (41s)  11s ago    9m   92.4M    -        19.2.3-678-ge911bdeb  654f31e6858e  51931a978021
2026-03-10T05:56:58.355 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm02.pglcfm   vm02  *:80              running (43s)  11s ago    9m   92.3M    -        19.2.3-678-ge911bdeb  654f31e6858e  a59d6d93b54c
2026-03-10T05:56:58.355 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm05.hqqmap   vm05  *:80              running (39s)  11s ago    9m   92.2M    -        19.2.3-678-ge911bdeb  654f31e6858e  62b012e7d3ec
2026-03-10T05:56:58.611 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:56:58.611 INFO:teuthology.orchestra.run.vm02.stdout:    "mon": {
2026-03-10T05:56:58.611 INFO:teuthology.orchestra.run.vm02.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-10T05:56:58.611 INFO:teuthology.orchestra.run.vm02.stdout:    },
2026-03-10T05:56:58.611 INFO:teuthology.orchestra.run.vm02.stdout:    "mgr": {
2026-03-10T05:56:58.611 INFO:teuthology.orchestra.run.vm02.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T05:56:58.611 INFO:teuthology.orchestra.run.vm02.stdout:    },
2026-03-10T05:56:58.611 INFO:teuthology.orchestra.run.vm02.stdout:    "osd": {
2026-03-10T05:56:58.611 INFO:teuthology.orchestra.run.vm02.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 8
2026-03-10T05:56:58.611 INFO:teuthology.orchestra.run.vm02.stdout:    },
2026-03-10T05:56:58.611 INFO:teuthology.orchestra.run.vm02.stdout:    "rgw": {
2026-03-10T05:56:58.611 INFO:teuthology.orchestra.run.vm02.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4
2026-03-10T05:56:58.611 INFO:teuthology.orchestra.run.vm02.stdout:    },
2026-03-10T05:56:58.611 INFO:teuthology.orchestra.run.vm02.stdout:    "overall": {
2026-03-10T05:56:58.611 INFO:teuthology.orchestra.run.vm02.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 17
2026-03-10T05:56:58.611 INFO:teuthology.orchestra.run.vm02.stdout:    }
2026-03-10T05:56:58.611 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:56:58.816 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:56:58.816 INFO:teuthology.orchestra.run.vm02.stdout:    "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
2026-03-10T05:56:58.816 INFO:teuthology.orchestra.run.vm02.stdout:    "in_progress": true,
2026-03-10T05:56:58.816 INFO:teuthology.orchestra.run.vm02.stdout:    "which": "Upgrading all daemon types on all hosts",
2026-03-10T05:56:58.816 INFO:teuthology.orchestra.run.vm02.stdout:    "services_complete": [
2026-03-10T05:56:58.817 INFO:teuthology.orchestra.run.vm02.stdout:        "mgr",
2026-03-10T05:56:58.817 INFO:teuthology.orchestra.run.vm02.stdout:        "mon",
2026-03-10T05:56:58.817 INFO:teuthology.orchestra.run.vm02.stdout:        "rgw",
2026-03-10T05:56:58.817 INFO:teuthology.orchestra.run.vm02.stdout:        "osd",
2026-03-10T05:56:58.817 INFO:teuthology.orchestra.run.vm02.stdout:        "iscsi"
2026-03-10T05:56:58.817 INFO:teuthology.orchestra.run.vm02.stdout:    ],
2026-03-10T05:56:58.817 INFO:teuthology.orchestra.run.vm02.stdout:    "progress": "18/23 daemons upgraded",
2026-03-10T05:56:58.817 INFO:teuthology.orchestra.run.vm02.stdout:    "message": "Currently upgrading grafana daemons",
2026-03-10T05:56:58.817 INFO:teuthology.orchestra.run.vm02.stdout:    "is_paused": false
2026-03-10T05:56:58.817 INFO:teuthology.orchestra.run.vm02.stdout:}
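The upgrade status JSON above reports in_progress: true with 18/23 daemons upgraded while grafana is redeployed. A stand-alone sketch of the wait-and-verify pattern applied here (field names taken from that JSON; how a failed upgrade surfaces in .message is an assumption):

  # Poll until the orchestrator reports the upgrade finished; jq -e exits
  # non-zero once .in_progress is false, which ends the loop.
  while ceph orch upgrade status | jq -e '.in_progress' >/dev/null; do
    ceph orch upgrade status | jq -r '"\(.progress): \(.message)"'
    sleep 30
  done
  # Afterwards every daemon type should have converged on a single version:
  ceph versions | jq -e '.overall | length == 1'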
2026-03-10T05:56:59.054 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_OK
2026-03-10T05:56:59.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:58 vm02 bash[56371]: cluster 2026-03-10T05:56:56.868710+0000 mgr.y (mgr.24992) 278 : cluster [DBG] pgmap v163: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s
2026-03-10T05:56:59.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:58 vm02 bash[56371]: audit 2026-03-10T05:56:58.609631+0000 mon.c (mon.1) 19 : audit [DBG] from='client.? 192.168.123.102:0/3570326278' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:00.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:59 vm02 bash[56371]: audit 2026-03-10T05:56:57.941038+0000 mgr.y (mgr.24992) 279 : audit [DBG] from='client.54440 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:57:00.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:59 vm02 bash[56371]: audit 2026-03-10T05:56:58.143978+0000 mgr.y (mgr.24992) 280 : audit [DBG] from='client.34528 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:57:00.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:59 vm02 bash[56371]: audit 2026-03-10T05:56:58.347694+0000 mgr.y (mgr.24992) 281 : audit [DBG] from='client.44532 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:57:00.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:59 vm02 bash[56371]: audit 2026-03-10T05:56:58.815317+0000 mgr.y (mgr.24992) 282 : audit [DBG] from='client.34540 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:57:00.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:59 vm02 bash[56371]: cluster 2026-03-10T05:56:58.869101+0000 mgr.y (mgr.24992) 283 : cluster [DBG] pgmap v164: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 2.9 KiB/s rd, 3 op/s
2026-03-10T05:57:00.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:56:59 vm02 bash[56371]: audit 2026-03-10T05:56:59.052784+0000 mon.a (mon.0) 662 : audit [DBG] from='client.? 192.168.123.102:0/1382740229' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T05:57:02.070 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:01 vm05 bash[43541]: cluster 2026-03-10T05:57:00.869603+0000 mgr.y (mgr.24992) 284 : cluster [DBG] pgmap v165: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s
2026-03-10T05:57:02.368 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:57:02 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
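systemd emits this KillMode=none complaint once per cephadm-managed unit on the host, so the identical message repeats for mgr.x, the OSDs, prometheus.a, node-exporter.b and grafana.a as daemons restart during the upgrade. cephadm owns and regenerates these unit files, so editing them directly is not the supported fix; purely to illustrate the override systemd is asking for, a drop-in against this cluster's unit template (fsid taken from the path in the warning) could look like:

  # Illustrative only -- cephadm may rewrite or bypass this on the next redeploy.
  fsid=107483ae-1c44-11f1-b530-c1172cd6122a
  sudo mkdir -p "/etc/systemd/system/ceph-${fsid}@.service.d"
  printf '[Service]\nKillMode=mixed\n' |
    sudo tee "/etc/systemd/system/ceph-${fsid}@.service.d/10-killmode.conf" >/dev/null
  sudo systemctl daemon-reload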
2026-03-10T05:57:02.369 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 systemd[1]: Stopping Ceph grafana.a for 107483ae-1c44-11f1-b530-c1172cd6122a...
2026-03-10T05:57:02.369 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[39420]: t=2026-03-10T05:57:02+0000 lvl=info msg="Shutdown started" logger=server reason="System signal: terminated"
2026-03-10T05:57:02.369 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[58900]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-grafana-a
2026-03-10T05:57:02.369 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@grafana.a.service: Deactivated successfully.
2026-03-10T05:57:02.369 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 systemd[1]: Stopped Ceph grafana.a for 107483ae-1c44-11f1-b530-c1172cd6122a.
2026-03-10T05:57:02.675 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T05:57:02.675 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 systemd[1]: Started Ceph grafana.a for 107483ae-1c44-11f1-b530-c1172cd6122a.
2026-03-10T05:57:02.923 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=settings t=2026-03-10T05:57:02.677458112Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-03-10T05:57:02Z
2026-03-10T05:57:02.923 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=settings t=2026-03-10T05:57:02.677995054Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
2026-03-10T05:57:02.923 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=settings t=2026-03-10T05:57:02.678071988Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
2026-03-10T05:57:02.923 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=settings t=2026-03-10T05:57:02.67813114Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
2026-03-10T05:57:02.923 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=settings t=2026-03-10T05:57:02.678169852Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
2026-03-10T05:57:02.923 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=settings t=2026-03-10T05:57:02.678221329Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
2026-03-10T05:57:02.923 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=settings t=2026-03-10T05:57:02.678259601Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
2026-03-10T05:57:02.923 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=settings t=2026-03-10T05:57:02.678309145Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
2026-03-10T05:57:02.923 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=settings t=2026-03-10T05:57:02.678345433Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
2026-03-10T05:57:02.923 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=settings t=2026-03-10T05:57:02.678390668Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
2026-03-10T05:57:02.923 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=settings t=2026-03-10T05:57:02.678428469Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
2026-03-10T05:57:02.923 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=settings t=2026-03-10T05:57:02.678495135Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
2026-03-10T05:57:02.923 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=settings t=2026-03-10T05:57:02.678536714Z level=info msg=Target target=[all]
2026-03-10T05:57:02.923 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=settings t=2026-03-10T05:57:02.678591045Z level=info msg="Path Home" path=/usr/share/grafana
2026-03-10T05:57:02.923 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=settings t=2026-03-10T05:57:02.678629057Z level=info msg="Path Data" path=/var/lib/grafana
2026-03-10T05:57:02.923 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=settings t=2026-03-10T05:57:02.678675054Z level=info msg="Path Logs" path=/var/log/grafana
2026-03-10T05:57:02.923 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=settings t=2026-03-10T05:57:02.678714308Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=settings t=2026-03-10T05:57:02.67874741Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=settings t=2026-03-10T05:57:02.678797334Z level=info msg="App mode production"
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=sqlstore t=2026-03-10T05:57:02.679000777Z level=info msg="Connecting to DB" dbtype=sqlite3
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=sqlstore t=2026-03-10T05:57:02.679063626Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.679483826Z level=info msg="Starting DB migrations"
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.69537809Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.722322663Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=26.937219ms
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.724146789Z level=info msg="Executing migration" id="Add uid column to user"
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.72657397Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=2.427362ms
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.7308271Z level=info msg="Executing migration" id="Update uid column values for users"
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.731096227Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=269.238µs
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.733632955Z level=info msg="Executing migration" id="Add unique index user_uid"
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.734277228Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=645.405µs
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.736098197Z level=info msg="Executing migration" id="Add isPublic for dashboard"
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.73830368Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.206336ms
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.739336345Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.739563994Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=227.478µs
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.740482314Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.742608678Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.126344ms
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.743617738Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.745689961Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.07149ms
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.746931569Z level=info msg="Executing migration" id="Add playlist column created_at"
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.74933747Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.405109ms
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.750458702Z level=info msg="Executing migration" id="Add playlist column updated_at"
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.752581389Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.121947ms
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.753514156Z level=info msg="Executing migration" id="Add column preferences.json_data"
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.755668203Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.153105ms
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.756783413Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.756970585Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=186.581µs
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.75814671Z level=info msg="Executing migration" id="Add preferences index org_id"
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.758793038Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=646.257µs
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.76006889Z level=info msg="Executing migration" id="Add preferences index user_id"
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.760682826Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=613.956µs
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.76199605Z level=info msg="Executing migration" id="Increase tags column to length 4096"
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.762186117Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=190.048µs
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.763112141Z level=info msg="Executing migration" id="Add column uid in team"
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.765231362Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=2.118799ms
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.766353296Z level=info msg="Executing migration" id="Update uid column values in team"
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.766582527Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=229.352µs
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.76746039Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.768105665Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=645.084µs
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.769368915Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.771515407Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=2.146192ms
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.772546749Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.773177676Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=630.486µs
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.774321732Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.774514505Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=192.522µs
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.775469142Z level=info msg="Executing migration" id="add current_reason column related to current_state"
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.777886255Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=2.416671ms
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.778978492Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.781172353Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=2.193652ms
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.782239934Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.784275297Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=2.034772ms
2026-03-10T05:57:02.924 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.784996114Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.787028312Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=2.032127ms
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.787863254Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.787887138Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=24.595µs
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.788651759Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.790713252Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=2.058968ms
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.791623596Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.793631928Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=2.008202ms
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.794547312Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.794571959Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=24.647µs
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.795334524Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.79737663Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=2.040713ms
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.798280162Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.800300817Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=2.020245ms
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.801045039Z level=info msg="Executing migration" id="create provenance_type table"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.801438981Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=393.702µs
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.802486112Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.802972829Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=486.986µs
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.804032525Z level=info msg="Executing migration" id="create alert_image table"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.804399366Z level=info msg="Migration successfully executed" id="create alert_image table" duration=366.951µs
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.805425939Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.805861459Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=435.96µs
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.806916496Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.806941213Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=24.716µs
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.80773636Z level=info msg="Executing migration" id=create_alert_configuration_history_table
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.808149378Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=412.867µs
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.809112653Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.80960544Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=492.627µs
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.810510505Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.810696895Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.811444695Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.811918857Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=474.184µs
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.812640336Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.813085914Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=445.508µs
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.813996981Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.816129497Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=2.131434ms
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.816988676Z level=info msg="Executing migration" id="increase max description length to 2048"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.816999726Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=11.221µs
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.817905802Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.817929877Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=24.425µs
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.818737038Z level=info msg="Executing migration" id="create secrets table"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.819102175Z level=info msg="Migration successfully executed" id="create secrets table" duration=364.345µs
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.820155359Z level=info msg="Executing migration" id="rename data_keys name column to id"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.830837416Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=10.681718ms
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.831788347Z level=info msg="Executing migration" id="add name column into data_keys"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.833927766Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=2.13987ms
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.83488035Z level=info msg="Executing migration" id="copy data_keys id column values into name"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.835022086Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=141.256µs
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.835791796Z level=info msg="Executing migration" id="rename data_keys name column to label"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.846179339Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=10.387073ms
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.847110482Z level=info msg="Executing migration" id="rename data_keys id column back to name"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.857018692Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=9.90803ms
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.858044464Z level=info msg="Executing migration" id="add column hidden to role table"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.86068686Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=2.641605ms
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.861693306Z level=info msg="Executing migration" id="permission kind migration"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.863975483Z level=info msg="Migration successfully executed" id="permission kind migration" duration=2.281837ms
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.864782765Z level=info msg="Executing migration" id="permission attribute migration"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.866887678Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=2.104654ms
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.867841514Z level=info msg="Executing migration" id="permission identifier migration"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.869962339Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=2.120473ms
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.870904514Z level=info msg="Executing migration" id="add permission identifier index"
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.871429241Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=524.598µs
2026-03-10T05:57:02.925 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.872576792Z level=info msg="Executing migration" id="add permission action scope role_id index"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.873149641Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=573.089µs
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.874337759Z level=info msg="Executing migration" id="remove permission role_id action scope index"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.874963187Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=626.511µs
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.875745579Z level=info msg="Executing migration" id="create query_history table v1"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.87619692Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=451.16µs
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.876946331Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.87747169Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=525.51µs
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.878718409Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.878753655Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=34.505µs
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.87967503Z level=info msg="Executing migration" id="rbac disabled migrator"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.879697141Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=22.422µs
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.880849873Z level=info msg="Executing migration" id="teams permissions migration"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.881146823Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=296.879µs
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.882110166Z level=info msg="Executing migration" id="dashboard permissions"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.884450854Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=2.34208ms
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.885368212Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.887141161Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.77286ms
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.888230172Z level=info msg="Executing migration" id="drop managed folder create actions"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.888323147Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=93.086µs
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.889443768Z level=info msg="Executing migration" id="alerting notification permissions"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.8896681Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=224.042µs
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.890673893Z level=info msg="Executing migration" id="create query_history_star table v1"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.891178375Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=504.471µs
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.892452254Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.892979516Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=527.041µs
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.894167423Z level=info msg="Executing migration" id="add column org_id in query_history_star"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.896508773Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=2.34142ms
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.897417654Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.897610308Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=192.813µs
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.898628885Z level=info msg="Executing migration" id="create correlation table v1"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.899176787Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=546.93µs
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.900372318Z level=info msg="Executing migration" id="add index correlations.uid"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.9009573Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=585.031µs
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.902152592Z level=info msg="Executing migration" id="add index correlations.source_uid"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.902791415Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=638.043µs
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.903910653Z level=info msg="Executing migration" id="add correlation config column"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.906486443Z level=info msg="Migration successfully executed" id="add correlation config column" duration=2.575971ms
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.90761601Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.908131952Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=515.932µs
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.9090179Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.909557898Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=539.386µs
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.910611421Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.917066769Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=6.455247ms
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.918153315Z level=info msg="Executing migration" id="create correlation v2"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.91871915Z level=info msg="Migration successfully executed" id="create correlation v2" duration=565.696µs
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.919670052Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.920192987Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=523.076µs
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.921468248Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.922002093Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=533.825µs
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.923199287Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.923731781Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=532.734µs
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.92487867Z level=info msg="Executing migration" id="copy correlation v1 to v2"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.925125775Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=247.286µs
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.926417118Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
2026-03-10T05:57:02.926 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.926891491Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=474.372µs
2026-03-10T05:57:03.181 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.927828997Z level=info msg="Executing migration" id="add provisioning column"
2026-03-10T05:57:03.181 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.931581934Z level=info msg="Migration successfully executed" id="add provisioning column" duration=3.752676ms
2026-03-10T05:57:03.181 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.932699109Z level=info msg="Executing migration" id="create entity_events table"
2026-03-10T05:57:03.181 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.933216543Z level=info msg="Migration successfully executed" id="create entity_events table" duration=517.604µs
2026-03-10T05:57:03.181 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.93427147Z level=info msg="Executing migration" id="create dashboard public config v1"
2026-03-10T05:57:03.181 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.934902988Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=631.48µs
2026-03-10T05:57:03.181 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.936257229Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
2026-03-10T05:57:03.181 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.93659237Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
2026-03-10T05:57:03.181 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.937452861Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
2026-03-10T05:57:03.181 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.937769567Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
2026-03-10T05:57:03.181 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.939052022Z level=info msg="Executing migration" id="Drop old dashboard public config table"
2026-03-10T05:57:03.181 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.939579687Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=527.764µs
2026-03-10T05:57:03.181 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.940462298Z level=info msg="Executing migration" id="recreate dashboard public config v1"
2026-03-10T05:57:03.181 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.941051058Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=588.95µs
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.942065597Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.942599414Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=533.084µs
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.94381881Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.944409131Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=590.994µs
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.945683702Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.946206346Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=522.654µs
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.947266183Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.947810437Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=544.054µs
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.948707096Z level=info msg="Executing migration" id="Drop public config table"
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.9492139Z level=info msg="Migration successfully executed" id="Drop public config table" duration=506.924µs
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.950265861Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.950871301Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=605.45µs
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.951861526Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.952448271Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=586.695µs
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.953305657Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.953857205Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=551.599µs
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.954867978Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.95541586Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=547.812µs
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.956899124Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.963373827Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=6.473942ms
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.964779024Z level=info msg="Executing migration" id="add annotations_enabled column"
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.967693553Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=2.913198ms
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.968910795Z level=info msg="Executing migration" id="add time_selection_enabled column"
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.97139355Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=2.482506ms
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.972594682Z level=info msg="Executing migration" id="delete orphaned public dashboards"
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.972854532Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=260.021µs
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.974002915Z level=info msg="Executing migration" id="add share column"
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.976386644Z level=info msg="Migration successfully executed" id="add share column" duration=2.383609ms
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.977349547Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.977588828Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=239.281µs
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.978772567Z level=info msg="Executing migration" id="create file table"
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.979305842Z level=info msg="Migration successfully executed" id="create file table" duration=533.305µs
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.980603505Z level=info msg="Executing migration" id="file table idx: path natural pk"
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.981205469Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=601.784µs
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.982502091Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.983163376Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=661.325µs
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.984500935Z level=info msg="Executing migration" id="create file_meta table"
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.985025312Z level=info msg="Migration successfully executed" id="create file_meta table" duration=524.407µs
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.986379934Z level=info msg="Executing migration" id="file table idx: path key"
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.987096784Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=717.091µs
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.988393405Z level=info msg="Executing migration" id="set path collation in file table"
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.988597089Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=203.884µs
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.989794675Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.990010661Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=216.487µs
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.991075236Z level=info msg="Executing migration" id="managed permissions migration"
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.992976696Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=1.902001ms
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.994317702Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.995388459Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=1.069374ms
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.996395246Z level=info msg="Executing migration" id="RBAC action name migrator"
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.997255195Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=859.589µs
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:02 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:02.998386275Z level=info msg="Executing migration" id="Add UID column to playlist"
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.001745882Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=3.359317ms
2026-03-10T05:57:03.182 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.003019791Z level=info msg="Executing migration" id="Update uid column values in playlist"
2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.003251768Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=231.987µs
2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.004156341Z level=info msg="Executing migration" id="Add index for uid in playlist"
2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.004717047Z level=info
msg="Migration successfully executed" id="Add index for uid in playlist" duration=560.877µs 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.005970318Z level=info msg="Executing migration" id="update group index for alert rules" 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.006326269Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=356.421µs 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.007211596Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.007728289Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=516.512µs 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.008561327Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.008899163Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=337.767µs 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.009925747Z level=info msg="Executing migration" id="add action column to seed_assignment" 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.012398974Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=2.472476ms 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.013289903Z level=info msg="Executing migration" id="add scope column to seed_assignment" 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.015857046Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=2.565231ms 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.016944846Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.017731487Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=787.022µs 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.018765715Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.042735677Z level=info msg="Migration 
successfully executed" id="update seed_assignment role_name column to nullable" duration=23.965434ms 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.044466887Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.045178687Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=714.065µs 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.046299669Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.046940395Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=640.916µs 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.048223472Z level=info msg="Executing migration" id="add primary key to seed_assigment" 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.055868479Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=7.654465ms 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.057469625Z level=info msg="Executing migration" id="add origin column to seed_assignment" 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.060045817Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=2.575971ms 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.061079765Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.061419825Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=339.949µs 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.062537449Z level=info msg="Executing migration" id="prevent seeding OnCall access" 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.062752575Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=215.295µs 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.063645566Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.06415215Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed 
migration" duration=506.754µs 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.06506005Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.065836161Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=776.142µs 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.066922778Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.067212815Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=290.698µs 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.068170047Z level=info msg="Executing migration" id="create folder table" 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.068785517Z level=info msg="Migration successfully executed" id="create folder table" duration=616.27µs 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.069924161Z level=info msg="Executing migration" id="Add index for parent_uid" 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.070600173Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=675.842µs 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.071974191Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.0726211Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=649.184µs 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.073955422Z level=info msg="Executing migration" id="Update folder title length" 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.074111115Z level=info msg="Migration successfully executed" id="Update folder title length" duration=156.205µs 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.075182232Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.075827027Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=644.754µs 2026-03-10T05:57:03.183 
INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.077067684Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 2026-03-10T05:57:03.183 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.077665079Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=597.385µs 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.078687745Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.079311299Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=623.374µs 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.080543449Z level=info msg="Executing migration" id="Sync dashboard and folder table" 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.08094779Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=402.729µs 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.082024098Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.082308893Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=284.885µs 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.083206144Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.084089187Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=882.993µs 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.085042803Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.085755785Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=712.812µs 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.086879312Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.087481145Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=602.055µs 2026-03-10T05:57:03.184 
INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.088376511Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.08903447Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=657.748µs 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.090094326Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.090709995Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=615.7µs 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.091712824Z level=info msg="Executing migration" id="create anon_device table" 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.092240197Z level=info msg="Migration successfully executed" id="create anon_device table" duration=527.383µs 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.093263935Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.094005362Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=741.277µs 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.095382435Z level=info msg="Executing migration" id="add index anon_device.updated_at" 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.096109995Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=727.671µs 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.09730696Z level=info msg="Executing migration" id="create signing_key table" 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.097886502Z level=info msg="Migration successfully executed" id="create signing_key table" duration=580.755µs 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.099265819Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.099899272Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=633.713µs 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.101105132Z level=info 
msg="Executing migration" id="set legacy alert migration status in kvstore" 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.101787017Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=681.976µs 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.102762103Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.103041098Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=279.214µs 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.103927038Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.106478162Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=2.550953ms 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.107584054Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.108705045Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.121242ms 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.111133168Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.111849537Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=716.248µs 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.112747208Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.113307213Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=559.964µs 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.114404999Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.115002916Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=595.371µs 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: 
logger=migrator t=2026-03-10T05:57:03.11589763Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.116537566Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=639.855µs 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.117514205Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.118112462Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=597.907µs 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.119176625Z level=info msg="Executing migration" id="create sso_setting table" 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.119810089Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=633.442µs 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.120826824Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.121510061Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=683.367µs 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.122654967Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 2026-03-10T05:57:03.184 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.123109973Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=455.148µs 2026-03-10T05:57:03.185 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.124027581Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 2026-03-10T05:57:03.185 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.124216878Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=191.341µs 2026-03-10T05:57:03.185 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.125159785Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 2026-03-10T05:57:03.185 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.127941793Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=2.781649ms 2026-03-10T05:57:03.185 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator 
t=2026-03-10T05:57:03.129051864Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 2026-03-10T05:57:03.185 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.131751799Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=2.699354ms 2026-03-10T05:57:03.185 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.132875044Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 2026-03-10T05:57:03.185 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.133172064Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=296.588µs 2026-03-10T05:57:03.185 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=migrator t=2026-03-10T05:57:03.134367225Z level=info msg="migrations completed" performed=169 skipped=378 duration=439.061421ms 2026-03-10T05:57:03.185 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=sqlstore t=2026-03-10T05:57:03.134931056Z level=info msg="Created default organization" 2026-03-10T05:57:03.185 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=secrets t=2026-03-10T05:57:03.137690592Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 2026-03-10T05:57:03.185 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=plugin.store t=2026-03-10T05:57:03.147020343Z level=info msg="Loading plugins..." 
2026-03-10T05:57:03.335 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:57:02 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:57:02] "GET /metrics HTTP/1.1" 200 38251 "" "Prometheus/2.51.0" 2026-03-10T05:57:03.471 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=local.finder t=2026-03-10T05:57:03.185552661Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 2026-03-10T05:57:03.471 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=plugin.store t=2026-03-10T05:57:03.185724995Z level=info msg="Plugins loaded" count=55 duration=38.706034ms 2026-03-10T05:57:03.472 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=query_data t=2026-03-10T05:57:03.188422065Z level=info msg="Query Service initialization" 2026-03-10T05:57:03.472 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=live.push_http t=2026-03-10T05:57:03.192474927Z level=info msg="Live Push Gateway initialization" 2026-03-10T05:57:03.472 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=ngalert.migration t=2026-03-10T05:57:03.195617765Z level=info msg=Starting 2026-03-10T05:57:03.472 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=ngalert t=2026-03-10T05:57:03.199308667Z level=warn msg="Unexpected number of rows updating alert configuration history" rows=0 org=1 hash=not-yet-calculated 2026-03-10T05:57:03.472 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=ngalert.state.manager t=2026-03-10T05:57:03.199979019Z level=info msg="Running in alternative execution of Error/NoData mode" 2026-03-10T05:57:03.472 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=infra.usagestats.collector t=2026-03-10T05:57:03.201013798Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 2026-03-10T05:57:03.472 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=provisioning.datasources t=2026-03-10T05:57:03.203440388Z level=info msg="deleted datasource based on configuration" name=Dashboard1 2026-03-10T05:57:03.472 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=provisioning.datasources t=2026-03-10T05:57:03.203764749Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596 2026-03-10T05:57:03.472 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=provisioning.alerting t=2026-03-10T05:57:03.214274712Z level=info msg="starting to provision alerting" 2026-03-10T05:57:03.472 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=provisioning.alerting t=2026-03-10T05:57:03.214437118Z level=info msg="finished to provision alerting" 2026-03-10T05:57:03.472 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=http.server t=2026-03-10T05:57:03.215712039Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 
configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA 2026-03-10T05:57:03.472 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=ngalert.state.manager t=2026-03-10T05:57:03.215775648Z level=info msg="Warming state cache for startup" 2026-03-10T05:57:03.472 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=http.server t=2026-03-10T05:57:03.216075003Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=https subUrl= socket= 2026-03-10T05:57:03.472 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=ngalert.state.manager t=2026-03-10T05:57:03.216263557Z level=info msg="State cache has been initialized" states=0 duration=487.467µs 2026-03-10T05:57:03.472 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=provisioning.dashboard t=2026-03-10T05:57:03.217066289Z level=info msg="starting to provision dashboards" 2026-03-10T05:57:03.472 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=grafanaStorageLogger t=2026-03-10T05:57:03.218050874Z level=info msg="Storage starting" 2026-03-10T05:57:03.472 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=ngalert.multiorg.alertmanager t=2026-03-10T05:57:03.232973867Z level=info msg="Starting MultiOrg Alertmanager" 2026-03-10T05:57:03.472 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=ngalert.scheduler t=2026-03-10T05:57:03.233075038Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 2026-03-10T05:57:03.472 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=ticker t=2026-03-10T05:57:03.233295202Z level=info msg=starting first_tick=2026-03-10T05:57:10Z 2026-03-10T05:57:03.472 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=sqlstore.transactions t=2026-03-10T05:57:03.249533444Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 2026-03-10T05:57:03.472 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=plugins.update.checker t=2026-03-10T05:57:03.303343814Z level=info msg="Update check succeeded" duration=86.000914ms 2026-03-10T05:57:03.472 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=provisioning.dashboard t=2026-03-10T05:57:03.380891598Z level=info msg="finished to provision dashboards" 2026-03-10T05:57:03.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:03 vm05 bash[43541]: audit 2026-03-10T05:57:02.046324+0000 mgr.y (mgr.24992) 285 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:57:03.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:03 vm05 bash[43541]: audit 2026-03-10T05:57:02.046324+0000 mgr.y (mgr.24992) 285 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:57:03.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:03 vm05 
bash[43541]: audit 2026-03-10T05:57:02.469153+0000 mon.a (mon.0) 663 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:03.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:03 vm05 bash[43541]: audit 2026-03-10T05:57:02.469153+0000 mon.a (mon.0) 663 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:03.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:03 vm05 bash[43541]: audit 2026-03-10T05:57:02.477219+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:03.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:03 vm05 bash[43541]: audit 2026-03-10T05:57:02.477219+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:03.749 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=grafana-apiserver t=2026-03-10T05:57:03.700149527Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 2026-03-10T05:57:03.749 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:57:03 vm05 bash[59013]: logger=grafana-apiserver t=2026-03-10T05:57:03.700896765Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 2026-03-10T05:57:03.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:03 vm02 bash[56371]: audit 2026-03-10T05:57:02.046324+0000 mgr.y (mgr.24992) 285 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:57:03.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:03 vm02 bash[56371]: audit 2026-03-10T05:57:02.046324+0000 mgr.y (mgr.24992) 285 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:57:03.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:03 vm02 bash[56371]: audit 2026-03-10T05:57:02.469153+0000 mon.a (mon.0) 663 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:03.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:03 vm02 bash[56371]: audit 2026-03-10T05:57:02.469153+0000 mon.a (mon.0) 663 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:03.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:03 vm02 bash[56371]: audit 2026-03-10T05:57:02.477219+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:03.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:03 vm02 bash[56371]: audit 2026-03-10T05:57:02.477219+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:03.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:03 vm02 bash[55303]: audit 2026-03-10T05:57:02.046324+0000 mgr.y (mgr.24992) 285 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:57:03.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:03 vm02 bash[55303]: audit 2026-03-10T05:57:02.046324+0000 mgr.y (mgr.24992) 285 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T05:57:03.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:03 vm02 bash[55303]: audit 2026-03-10T05:57:02.469153+0000 mon.a 
(mon.0) 663 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:03.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:03 vm02 bash[55303]: audit 2026-03-10T05:57:02.469153+0000 mon.a (mon.0) 663 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:03.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:03 vm02 bash[55303]: audit 2026-03-10T05:57:02.477219+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:03.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:03 vm02 bash[55303]: audit 2026-03-10T05:57:02.477219+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:04.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:04 vm05 bash[43541]: cluster 2026-03-10T05:57:02.869977+0000 mgr.y (mgr.24992) 286 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s 2026-03-10T05:57:04.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:04 vm05 bash[43541]: cluster 2026-03-10T05:57:02.869977+0000 mgr.y (mgr.24992) 286 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s 2026-03-10T05:57:04.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:04 vm02 bash[56371]: cluster 2026-03-10T05:57:02.869977+0000 mgr.y (mgr.24992) 286 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s 2026-03-10T05:57:04.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:04 vm02 bash[56371]: cluster 2026-03-10T05:57:02.869977+0000 mgr.y (mgr.24992) 286 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s 2026-03-10T05:57:04.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:04 vm02 bash[55303]: cluster 2026-03-10T05:57:02.869977+0000 mgr.y (mgr.24992) 286 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s 2026-03-10T05:57:04.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:04 vm02 bash[55303]: cluster 2026-03-10T05:57:02.869977+0000 mgr.y (mgr.24992) 286 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s 2026-03-10T05:57:06.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:06 vm02 bash[56371]: cluster 2026-03-10T05:57:04.870341+0000 mgr.y (mgr.24992) 287 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-10T05:57:06.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:06 vm02 bash[56371]: cluster 2026-03-10T05:57:04.870341+0000 mgr.y (mgr.24992) 287 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-10T05:57:06.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:06 vm02 bash[55303]: cluster 2026-03-10T05:57:04.870341+0000 mgr.y (mgr.24992) 287 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-10T05:57:06.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:06 vm02 bash[55303]: cluster 2026-03-10T05:57:04.870341+0000 mgr.y (mgr.24992) 287 : cluster [DBG] pgmap v167: 161 
pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-10T05:57:06.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:06 vm05 bash[43541]: cluster 2026-03-10T05:57:04.870341+0000 mgr.y (mgr.24992) 287 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-10T05:57:06.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:06 vm05 bash[43541]: cluster 2026-03-10T05:57:04.870341+0000 mgr.y (mgr.24992) 287 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-10T05:57:08.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:08 vm05 bash[43541]: cluster 2026-03-10T05:57:06.870766+0000 mgr.y (mgr.24992) 288 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 2.8 KiB/s rd, 2 op/s 2026-03-10T05:57:08.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:08 vm05 bash[43541]: cluster 2026-03-10T05:57:06.870766+0000 mgr.y (mgr.24992) 288 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 2.8 KiB/s rd, 2 op/s 2026-03-10T05:57:08.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:08 vm05 bash[43541]: audit 2026-03-10T05:57:07.710641+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:08.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:08 vm05 bash[43541]: audit 2026-03-10T05:57:07.710641+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:08.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:08 vm05 bash[43541]: audit 2026-03-10T05:57:07.718369+0000 mon.a (mon.0) 666 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:08.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:08 vm05 bash[43541]: audit 2026-03-10T05:57:07.718369+0000 mon.a (mon.0) 666 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:08.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:08 vm05 bash[43541]: audit 2026-03-10T05:57:07.780636+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:08.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:08 vm05 bash[43541]: audit 2026-03-10T05:57:07.780636+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:08.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:08 vm05 bash[43541]: audit 2026-03-10T05:57:07.786654+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:08.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:08 vm05 bash[43541]: audit 2026-03-10T05:57:07.786654+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:08.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:08 vm05 bash[43541]: audit 2026-03-10T05:57:08.314760+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:08.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:08 vm05 bash[43541]: audit 2026-03-10T05:57:08.314760+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:08.999 
INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:08 vm05 bash[43541]: audit 2026-03-10T05:57:08.321452+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:08.999 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:08 vm05 bash[43541]: audit 2026-03-10T05:57:08.321452+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:08 vm02 bash[56371]: cluster 2026-03-10T05:57:06.870766+0000 mgr.y (mgr.24992) 288 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 2.8 KiB/s rd, 2 op/s 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:08 vm02 bash[56371]: cluster 2026-03-10T05:57:06.870766+0000 mgr.y (mgr.24992) 288 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 2.8 KiB/s rd, 2 op/s 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:08 vm02 bash[56371]: audit 2026-03-10T05:57:07.710641+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:08 vm02 bash[56371]: audit 2026-03-10T05:57:07.710641+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:08 vm02 bash[56371]: audit 2026-03-10T05:57:07.718369+0000 mon.a (mon.0) 666 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:08 vm02 bash[56371]: audit 2026-03-10T05:57:07.718369+0000 mon.a (mon.0) 666 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:08 vm02 bash[56371]: audit 2026-03-10T05:57:07.780636+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:08 vm02 bash[56371]: audit 2026-03-10T05:57:07.780636+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:08 vm02 bash[56371]: audit 2026-03-10T05:57:07.786654+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:08 vm02 bash[56371]: audit 2026-03-10T05:57:07.786654+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:08 vm02 bash[56371]: audit 2026-03-10T05:57:08.314760+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:08 vm02 bash[56371]: audit 2026-03-10T05:57:08.314760+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:08 vm02 bash[56371]: audit 2026-03-10T05:57:08.321452+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:09.085 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:08 vm02 bash[56371]: audit 2026-03-10T05:57:08.321452+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:08 vm02 bash[55303]: cluster 2026-03-10T05:57:06.870766+0000 mgr.y (mgr.24992) 288 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 2.8 KiB/s rd, 2 op/s 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:08 vm02 bash[55303]: cluster 2026-03-10T05:57:06.870766+0000 mgr.y (mgr.24992) 288 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 2.8 KiB/s rd, 2 op/s 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:08 vm02 bash[55303]: audit 2026-03-10T05:57:07.710641+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:08 vm02 bash[55303]: audit 2026-03-10T05:57:07.710641+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:08 vm02 bash[55303]: audit 2026-03-10T05:57:07.718369+0000 mon.a (mon.0) 666 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:08 vm02 bash[55303]: audit 2026-03-10T05:57:07.718369+0000 mon.a (mon.0) 666 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:08 vm02 bash[55303]: audit 2026-03-10T05:57:07.780636+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:08 vm02 bash[55303]: audit 2026-03-10T05:57:07.780636+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:08 vm02 bash[55303]: audit 2026-03-10T05:57:07.786654+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:08 vm02 bash[55303]: audit 2026-03-10T05:57:07.786654+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:08 vm02 bash[55303]: audit 2026-03-10T05:57:08.314760+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:08 vm02 bash[55303]: audit 2026-03-10T05:57:08.314760+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:08 vm02 bash[55303]: audit 2026-03-10T05:57:08.321452+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:09.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:08 vm02 bash[55303]: audit 2026-03-10T05:57:08.321452+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:10.998 
2026-03-10T05:57:10.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:10 vm05 bash[43541]: cluster 2026-03-10T05:57:08.871136+0000 mgr.y (mgr.24992) 289 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:11.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:10 vm02 bash[56371]: cluster 2026-03-10T05:57:08.871136+0000 mgr.y (mgr.24992) 289 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:11.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:10 vm02 bash[55303]: cluster 2026-03-10T05:57:08.871136+0000 mgr.y (mgr.24992) 289 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:11.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:11 vm05 bash[43541]: audit 2026-03-10T05:57:10.884095+0000 mon.a (mon.0) 671 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:57:12.055 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:11 vm02 bash[56371]: audit 2026-03-10T05:57:10.884095+0000 mon.a (mon.0) 671 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:57:12.055 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:11 vm02 bash[55303]: audit 2026-03-10T05:57:10.884095+0000 mon.a (mon.0) 671 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:57:12.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:12 vm05 bash[43541]: cluster 2026-03-10T05:57:10.871447+0000 mgr.y (mgr.24992) 290 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:12.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:12 vm05 bash[43541]: audit 2026-03-10T05:57:12.054480+0000 mgr.y (mgr.24992) 291 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:57:13.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:12 vm02 bash[56371]: cluster 2026-03-10T05:57:10.871447+0000 mgr.y (mgr.24992) 290 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:13.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:12 vm02 bash[56371]: audit 2026-03-10T05:57:12.054480+0000 mgr.y (mgr.24992) 291 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:57:13.085 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:57:12 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:57:12] "GET /metrics HTTP/1.1" 200 38251 "" "Prometheus/2.51.0"
2026-03-10T05:57:13.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:12 vm02 bash[55303]: cluster 2026-03-10T05:57:10.871447+0000 mgr.y (mgr.24992) 290 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:13.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:12 vm02 bash[55303]: audit 2026-03-10T05:57:12.054480+0000 mgr.y (mgr.24992) 291 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:57:13.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:13 vm05 bash[43541]: cluster 2026-03-10T05:57:12.871852+0000 mgr.y (mgr.24992) 292 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:14.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:13 vm02 bash[56371]: cluster 2026-03-10T05:57:12.871852+0000 mgr.y (mgr.24992) 292 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:14.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:13 vm02 bash[55303]: cluster 2026-03-10T05:57:12.871852+0000 mgr.y (mgr.24992) 292 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:15.498 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.179358+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:57:15.498 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.186160+0000 mon.a (mon.0) 673 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.187273+0000 mon.a (mon.0) 674 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.187735+0000 mon.a (mon.0) 675 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.192655+0000 mon.a (mon.0) 676 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.205531+0000 mon.a (mon.0) 677 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.205854+0000 mgr.y (mgr.24992) 293 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.235587+0000 mon.a (mon.0) 678 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.236900+0000 mon.a (mon.0) 679 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.237650+0000 mon.a (mon.0) 680 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.238233+0000 mon.a (mon.0) 681 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.238994+0000 mon.a (mon.0) 682 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.240076+0000 mon.a (mon.0) 683 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.240703+0000 mon.a (mon.0) 684 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.241198+0000 mon.a (mon.0) 685 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.241689+0000 mon.a (mon.0) 686 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.242217+0000 mon.a (mon.0) 687 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.242977+0000 mon.a (mon.0) 688 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.243518+0000 mon.a (mon.0) 689 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.245088+0000 mon.a (mon.0) 690 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.246432+0000 mon.a (mon.0) 691 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.247523+0000 mon.a (mon.0) 692 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.248140+0000 mon.a (mon.0) 693 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.249004+0000 mon.a (mon.0) 694 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.249872+0000 mon.a (mon.0) 695 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.250749+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: cephadm 2026-03-10T05:57:14.251336+0000 mgr.y (mgr.24992) 294 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.255379+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.258480+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.261200+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-10T05:57:15.499 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.263401+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.265874+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.267585+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.270436+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.272064+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.274979+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.276641+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.279127+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.280736+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.283525+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.285150+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.287693+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.289382+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.289695+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.292120+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.293672+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.296126+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.297723+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.300375+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.302022+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.304685+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.306321+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.306643+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.306960+0000 mon.a (mon.0) 723 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.307304+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.307606+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.307900+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: cephadm 2026-03-10T05:57:14.308167+0000 mgr.y (mgr.24992) 295 : cephadm [INF] Upgrade: Complete!
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.308389+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.311034+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.311669+0000 mon.a (mon.0) 729 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.312073+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.315103+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:57:15.500 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.355209+0000 mon.a (mon.0) 732 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:57:15.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.355769+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:57:15.501 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:15 vm05 bash[43541]: audit 2026-03-10T05:57:14.360304+0000 mon.a (mon.0) 734 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:57:15.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.179358+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:57:15.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.186160+0000 mon.a (mon.0) 673 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:57:15.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.187273+0000 mon.a (mon.0) 674 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:57:15.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.187735+0000 mon.a (mon.0) 675 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:57:15.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.192655+0000 mon.a (mon.0) 676 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:57:15.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.205531+0000 mon.a (mon.0) 677 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T05:57:15.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.205854+0000 mgr.y (mgr.24992) 293 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T05:57:15.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.235587+0000 mon.a (mon.0) 678 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T05:57:15.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.236900+0000 mon.a (mon.0) 679 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.585 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.237650+0000 mon.a (mon.0) 680 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.238233+0000 mon.a (mon.0) 681 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.238994+0000 mon.a (mon.0) 682 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.240076+0000 mon.a (mon.0) 683 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.240703+0000 mon.a (mon.0) 684 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.241198+0000 mon.a (mon.0) 685 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.241689+0000 mon.a (mon.0) 686 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.242217+0000 mon.a (mon.0) 687 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.242977+0000 mon.a (mon.0) 688 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.243518+0000 mon.a (mon.0) 689 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.245088+0000 mon.a (mon.0) 690 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.246432+0000 mon.a (mon.0) 691 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.246432+0000 mon.a (mon.0) 691 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.247523+0000 mon.a (mon.0) 692 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.247523+0000 mon.a (mon.0) 692 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.248140+0000 mon.a (mon.0) 693 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.248140+0000 mon.a (mon.0) 693 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.249004+0000 mon.a (mon.0) 694 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.249004+0000 mon.a (mon.0) 694 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.249872+0000 mon.a (mon.0) 695 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.249872+0000 mon.a (mon.0) 695 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.250749+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.250749+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: cephadm 2026-03-10T05:57:14.251336+0000 mgr.y (mgr.24992) 294 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: cephadm 2026-03-10T05:57:14.251336+0000 mgr.y (mgr.24992) 
294 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.255379+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.255379+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.258480+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.258480+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.261200+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.261200+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.263401+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.263401+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.265874+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.265874+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.267585+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.267585+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 
cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.270436+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.270436+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.272064+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.272064+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.274979+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.274979+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.276641+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.276641+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.279127+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.279127+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-10T05:57:15.586 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.280736+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T05:57:15.586 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.280736+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.283525+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.283525+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.285150+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.285150+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.287693+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.287693+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.289382+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.289382+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.289695+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.289695+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T05:57:15.587 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.292120+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.292120+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.293672+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.293672+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.296126+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.296126+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.297723+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.297723+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.300375+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.300375+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.302022+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T05:57:15.587 
INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.302022+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.304685+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.304685+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.306321+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.306321+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.306643+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.306643+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.306960+0000 mon.a (mon.0) 723 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.306960+0000 mon.a (mon.0) 723 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.307304+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.307304+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.307606+0000 mon.a 
(mon.0) 725 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.307606+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.307900+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.307900+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: cephadm 2026-03-10T05:57:14.308167+0000 mgr.y (mgr.24992) 295 : cephadm [INF] Upgrade: Complete! 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: cephadm 2026-03-10T05:57:14.308167+0000 mgr.y (mgr.24992) 295 : cephadm [INF] Upgrade: Complete! 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.308389+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.308389+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.311034+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.311034+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.311669+0000 mon.a (mon.0) 729 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.311669+0000 mon.a (mon.0) 729 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.312073+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": 
"client.admin"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.312073+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.315103+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.315103+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.355209+0000 mon.a (mon.0) 732 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.355209+0000 mon.a (mon.0) 732 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.355769+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.355769+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.360304+0000 mon.a (mon.0) 734 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:15 vm02 bash[56371]: audit 2026-03-10T05:57:14.360304+0000 mon.a (mon.0) 734 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.179358+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.179358+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.186160+0000 mon.a (mon.0) 673 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.186160+0000 mon.a (mon.0) 673 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.187273+0000 mon.a (mon.0) 674 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-10T05:57:15.587 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.187273+0000 mon.a (mon.0) 674 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.187735+0000 mon.a (mon.0) 675 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.187735+0000 mon.a (mon.0) 675 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.192655+0000 mon.a (mon.0) 676 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.192655+0000 mon.a (mon.0) 676 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.205531+0000 mon.a (mon.0) 677 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.205531+0000 mon.a (mon.0) 677 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.205854+0000 mgr.y (mgr.24992) 293 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.205854+0000 mgr.y (mgr.24992) 293 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.235587+0000 mon.a (mon.0) 678 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.235587+0000 mon.a (mon.0) 678 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.236900+0000 mon.a (mon.0) 679 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.236900+0000 mon.a (mon.0) 679 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.237650+0000 mon.a (mon.0) 680 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.237650+0000 mon.a (mon.0) 680 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.238233+0000 mon.a (mon.0) 681 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.238233+0000 mon.a (mon.0) 681 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.238994+0000 mon.a (mon.0) 682 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.238994+0000 mon.a (mon.0) 682 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.240076+0000 mon.a (mon.0) 683 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.240076+0000 mon.a (mon.0) 683 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.240703+0000 mon.a (mon.0) 684 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": 
"versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.240703+0000 mon.a (mon.0) 684 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.241198+0000 mon.a (mon.0) 685 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.241198+0000 mon.a (mon.0) 685 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.241689+0000 mon.a (mon.0) 686 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.241689+0000 mon.a (mon.0) 686 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.242217+0000 mon.a (mon.0) 687 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.242217+0000 mon.a (mon.0) 687 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.242977+0000 mon.a (mon.0) 688 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.242977+0000 mon.a (mon.0) 688 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.243518+0000 mon.a (mon.0) 689 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.243518+0000 mon.a (mon.0) 689 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.245088+0000 mon.a (mon.0) 690 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.245088+0000 mon.a (mon.0) 690 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 
INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.246432+0000 mon.a (mon.0) 691 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.246432+0000 mon.a (mon.0) 691 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.247523+0000 mon.a (mon.0) 692 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.247523+0000 mon.a (mon.0) 692 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.248140+0000 mon.a (mon.0) 693 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.248140+0000 mon.a (mon.0) 693 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.249004+0000 mon.a (mon.0) 694 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.249004+0000 mon.a (mon.0) 694 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.249872+0000 mon.a (mon.0) 695 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.249872+0000 mon.a (mon.0) 695 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.250749+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.250749+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: cephadm 2026-03-10T05:57:14.251336+0000 mgr.y (mgr.24992) 294 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: cephadm 2026-03-10T05:57:14.251336+0000 mgr.y (mgr.24992) 
294 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.255379+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.255379+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.258480+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.258480+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.261200+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.261200+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.263401+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.263401+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.265874+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.265874+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.267585+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.267585+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' 
cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.270436+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.270436+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.272064+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.272064+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.274979+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.274979+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.276641+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T05:57:15.588 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.276641+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.279127+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.279127+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.280736+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T05:57:15.589 
INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.280736+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.283525+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished 2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.283525+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished 2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.285150+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.285150+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.287693+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.287693+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.289382+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.289382+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.289695+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.289695+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T05:57:15.589 
INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.292120+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished
2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.293672+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.296126+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished
2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.297723+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.300375+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished
2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.302022+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.304685+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.306321+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.306643+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.306960+0000 mon.a (mon.0) 723 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.307304+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.307606+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.307900+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: cephadm 2026-03-10T05:57:14.308167+0000 mgr.y (mgr.24992) 295 : cephadm [INF] Upgrade: Complete!
2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.308389+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.311034+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished
2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.311669+0000 mon.a (mon.0) 729 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.312073+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.315103+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.355209+0000 mon.a (mon.0) 732 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.355769+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:57:15.589 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:15 vm02 bash[55303]: audit 2026-03-10T05:57:14.360304+0000 mon.a (mon.0) 734 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:57:16.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:16 vm05 bash[43541]: cluster 2026-03-10T05:57:14.872192+0000 mgr.y (mgr.24992) 296 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:16.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:16 vm05 bash[43541]: audit 2026-03-10T05:57:15.980893+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:57:16.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:16 vm02 bash[56371]: cluster 2026-03-10T05:57:14.872192+0000 mgr.y (mgr.24992) 296 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:16.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:16 vm02 bash[56371]: audit 2026-03-10T05:57:15.980893+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:57:16.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:16 vm02 bash[55303]: cluster 2026-03-10T05:57:14.872192+0000 mgr.y (mgr.24992) 296 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:16.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:16 vm02 bash[55303]: audit 2026-03-10T05:57:15.980893+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:57:18.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:18 vm05 bash[43541]: cluster 2026-03-10T05:57:16.872561+0000 mgr.y (mgr.24992) 297 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:18.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:18 vm02 bash[56371]: cluster 2026-03-10T05:57:16.872561+0000 mgr.y (mgr.24992) 297 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:18.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:18 vm02 bash[55303]: cluster 2026-03-10T05:57:16.872561+0000 mgr.y (mgr.24992) 297 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:20.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:20 vm05 bash[43541]: cluster 2026-03-10T05:57:18.872906+0000 mgr.y (mgr.24992) 298 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:20.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:20 vm02 bash[56371]: cluster 2026-03-10T05:57:18.872906+0000 mgr.y (mgr.24992) 298 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:20.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:20 vm02 bash[55303]: cluster 2026-03-10T05:57:18.872906+0000 mgr.y (mgr.24992) 298 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:22.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:22 vm05 bash[43541]: cluster 2026-03-10T05:57:20.873269+0000 mgr.y (mgr.24992) 299 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:22.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:22 vm02 bash[56371]: cluster 2026-03-10T05:57:20.873269+0000 mgr.y (mgr.24992) 299 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:22.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:22 vm02 bash[55303]: cluster 2026-03-10T05:57:20.873269+0000 mgr.y (mgr.24992) 299 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:23.335 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:57:22 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:57:22] "GET /metrics HTTP/1.1" 200 38251 "" "Prometheus/2.51.0"
2026-03-10T05:57:23.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:23 vm05 bash[43541]: audit 2026-03-10T05:57:22.062216+0000 mgr.y (mgr.24992) 300 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:57:23.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:23 vm02 bash[56371]: audit 2026-03-10T05:57:22.062216+0000 mgr.y (mgr.24992) 300 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:57:23.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:23 vm02 bash[55303]: audit 2026-03-10T05:57:22.062216+0000 mgr.y (mgr.24992) 300 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:57:24.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:24 vm05 bash[43541]: cluster 2026-03-10T05:57:22.873671+0000 mgr.y (mgr.24992) 301 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:24.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:24 vm02 bash[56371]: cluster 2026-03-10T05:57:22.873671+0000 mgr.y (mgr.24992) 301 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:24.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:24 vm02 bash[55303]: cluster 2026-03-10T05:57:22.873671+0000 mgr.y (mgr.24992) 301 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:26.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:26 vm05 bash[43541]: cluster 2026-03-10T05:57:24.874016+0000 mgr.y (mgr.24992) 302 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:26.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:26 vm05 bash[43541]: audit 2026-03-10T05:57:25.884446+0000 mon.a (mon.0) 736 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:57:26.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:26 vm02 bash[56371]: cluster 2026-03-10T05:57:24.874016+0000 mgr.y (mgr.24992) 302 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:26.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:26 vm02 bash[56371]: audit 2026-03-10T05:57:25.884446+0000 mon.a (mon.0) 736 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:57:26.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:26 vm02 bash[55303]: cluster 2026-03-10T05:57:24.874016+0000 mgr.y (mgr.24992) 302 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:26.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:26 vm02 bash[55303]: audit 2026-03-10T05:57:25.884446+0000 mon.a (mon.0) 736 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:57:28.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:28 vm05 bash[43541]: cluster 2026-03-10T05:57:26.874369+0000 mgr.y (mgr.24992) 303 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:28.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:28 vm02 bash[56371]: cluster 2026-03-10T05:57:26.874369+0000 mgr.y (mgr.24992) 303 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:28.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:28 vm02 bash[55303]: cluster 2026-03-10T05:57:26.874369+0000 mgr.y (mgr.24992) 303 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:29.323 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps'
2026-03-10T05:57:29.744 INFO:teuthology.orchestra.run.vm02.stdout:NAME                   HOST  PORTS             STATUS          REFRESHED  AGE  MEM USE  MEM LIM  VERSION               IMAGE ID      CONTAINER ID
2026-03-10T05:57:29.744 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.a         vm02  *:9093,9094       running (5m)    22s ago    10m  13.2M    -        0.25.0                c8568f914cd2  7a7c5c2cddb6
2026-03-10T05:57:29.744 INFO:teuthology.orchestra.run.vm02.stdout:grafana.a              vm05  *:3000            running (27s)   21s ago    10m  58.0M    -        10.4.0                c8b91775d855  5f00ef7c3fac
2026-03-10T05:57:29.744 INFO:teuthology.orchestra.run.vm02.stdout:iscsi.foo.vm02.mxbwmh  vm02                    running (48s)   22s ago    9m   48.3M    -        3.9                   654f31e6858e  f1b577537dcd
2026-03-10T05:57:29.745 INFO:teuthology.orchestra.run.vm02.stdout:mgr.x                  vm05  *:8443,9283,8765  running (4m)    21s ago    12m  465M     -        19.2.3-678-ge911bdeb  654f31e6858e  7579626ada90
2026-03-10T05:57:29.745 INFO:teuthology.orchestra.run.vm02.stdout:mgr.y                  vm02  *:8443,9283,8765  running (5m)    22s ago    13m  544M     -        19.2.3-678-ge911bdeb  654f31e6858e  ef46d0f7b15e
2026-03-10T05:57:29.745 INFO:teuthology.orchestra.run.vm02.stdout:mon.a                  vm02                    running (4m)    22s ago    13m  60.2M    2048M    19.2.3-678-ge911bdeb  654f31e6858e  df3a0a290a95
2026-03-10T05:57:29.745 INFO:teuthology.orchestra.run.vm02.stdout:mon.b                  vm05                    running (4m)    21s ago    13m  51.4M    2048M    19.2.3-678-ge911bdeb  654f31e6858e  1da04b90d16b
2026-03-10T05:57:29.745 INFO:teuthology.orchestra.run.vm02.stdout:mon.c                  vm02                    running (4m)    22s ago    13m  57.7M    2048M    19.2.3-678-ge911bdeb  654f31e6858e  7f2cdf1b7aa6
2026-03-10T05:57:29.745 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.a        vm02  *:9100            running (5m)    22s ago    10m  7560k    -        1.7.0                 72c9c2088986  90288450bd1f
2026-03-10T05:57:29.745 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.b        vm05  *:9100            running (5m)    21s ago    10m  7591k    -        1.7.0                 72c9c2088986  4e859143cb0e
2026-03-10T05:57:29.745 INFO:teuthology.orchestra.run.vm02.stdout:osd.0                  vm02                    running (3m)    22s ago    12m  75.8M    4096M    19.2.3-678-ge911bdeb  654f31e6858e  640360275f83
2026-03-10T05:57:29.745 INFO:teuthology.orchestra.run.vm02.stdout:osd.1                  vm02                    running (2m)    22s ago    12m  57.1M    4096M    19.2.3-678-ge911bdeb  654f31e6858e  4de5c460789a
2026-03-10T05:57:29.745 INFO:teuthology.orchestra.run.vm02.stdout:osd.2                  vm02                    running (3m)    22s ago    12m  51.3M    4096M    19.2.3-678-ge911bdeb  654f31e6858e  51dac2f581d9
2026-03-10T05:57:29.745 INFO:teuthology.orchestra.run.vm02.stdout:osd.3                  vm02                    running (3m)    22s ago    11m  81.4M    4096M    19.2.3-678-ge911bdeb  654f31e6858e  0eca961791f4
2026-03-10T05:57:29.745 INFO:teuthology.orchestra.run.vm02.stdout:osd.4                  vm05                    running (2m)    21s ago    11m  57.2M    4096M    19.2.3-678-ge911bdeb  654f31e6858e  2c1b499265f4
2026-03-10T05:57:29.745 INFO:teuthology.orchestra.run.vm02.stdout:osd.5                  vm05                    running (2m)    21s ago    11m  75.9M    4096M    19.2.3-678-ge911bdeb  654f31e6858e  7ec1a1246098
2026-03-10T05:57:29.745 INFO:teuthology.orchestra.run.vm02.stdout:osd.6                  vm05                    running (107s)  21s ago    11m  73.8M    4096M    19.2.3-678-ge911bdeb  654f31e6858e  bd151ab03026
2026-03-10T05:57:29.745 INFO:teuthology.orchestra.run.vm02.stdout:osd.7                  vm05                    running (91s)   21s ago    10m  73.1M    4096M    19.2.3-678-ge911bdeb  654f31e6858e  83fe4a7f26f5
2026-03-10T05:57:29.745 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.a           vm05  *:9095            running (4m)    21s ago    10m  39.3M    -        2.51.0                1d3b7f56885b  3328811f8f28
2026-03-10T05:57:29.745 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm02.pbogjd    vm02  *:8000            running (76s)   22s ago    9m   92.6M    -        19.2.3-678-ge911bdeb  654f31e6858e  4e1a47dc4ede
2026-03-10T05:57:29.745 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm05.hvmsxl    vm05  *:8000            running (72s)   21s ago    9m   92.6M    -        19.2.3-678-ge911bdeb  654f31e6858e  51931a978021
2026-03-10T05:57:29.745 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm02.pglcfm   vm02  *:80              running (74s)   22s ago    9m   92.6M    -        19.2.3-678-ge911bdeb  654f31e6858e  a59d6d93b54c
2026-03-10T05:57:29.745 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm05.hqqmap   vm05  *:80              running (70s)   21s ago    9m   92.4M    -        19.2.3-678-ge911bdeb  654f31e6858e  62b012e7d3ec
2026-03-10T05:57:29.794 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions'
2026-03-10T05:57:30.247 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:57:30.248 INFO:teuthology.orchestra.run.vm02.stdout:    "mon": {
2026-03-10T05:57:30.248 INFO:teuthology.orchestra.run.vm02.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-10T05:57:30.248 INFO:teuthology.orchestra.run.vm02.stdout:    },
2026-03-10T05:57:30.248 INFO:teuthology.orchestra.run.vm02.stdout:    "mgr": {
2026-03-10T05:57:30.248 INFO:teuthology.orchestra.run.vm02.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T05:57:30.248 INFO:teuthology.orchestra.run.vm02.stdout:    },
2026-03-10T05:57:30.248 INFO:teuthology.orchestra.run.vm02.stdout:    "osd": {
2026-03-10T05:57:30.248 INFO:teuthology.orchestra.run.vm02.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 8
2026-03-10T05:57:30.248 INFO:teuthology.orchestra.run.vm02.stdout:    },
2026-03-10T05:57:30.248 INFO:teuthology.orchestra.run.vm02.stdout:    "rgw": {
2026-03-10T05:57:30.248 INFO:teuthology.orchestra.run.vm02.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4
2026-03-10T05:57:30.248 INFO:teuthology.orchestra.run.vm02.stdout:    },
2026-03-10T05:57:30.248 INFO:teuthology.orchestra.run.vm02.stdout:    "overall": {
2026-03-10T05:57:30.248 INFO:teuthology.orchestra.run.vm02.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 17
2026-03-10T05:57:30.248 INFO:teuthology.orchestra.run.vm02.stdout:    }
2026-03-10T05:57:30.248 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:57:30.295 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'echo "wait for servicemap items w/ changing names to refresh"'
2026-03-10T05:57:30.518 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:30 vm02 bash[56371]: cluster 2026-03-10T05:57:28.874714+0000 mgr.y (mgr.24992) 304 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:30.518 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:30 vm02 bash[56371]: audit 2026-03-10T05:57:29.251133+0000 mgr.y (mgr.24992) 305 : audit [DBG] from='client.44550 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:57:30.519 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:30 vm02 bash[56371]: audit 2026-03-10T05:57:30.246512+0000 mon.a (mon.0) 737 : audit [DBG] from='client.? 192.168.123.102:0/568895299' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
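The 'ceph versions' JSON above shows every daemon type converged on a single build. A hedged way to assert that convergence mechanically, assuming jq is available (the keys of .overall are the running version strings, which embed the git sha1):

    # Exit non-zero unless exactly one version is running cluster-wide,
    # then check that it is the expected build.
    ceph versions | jq -e '.overall | length == 1'
    ceph versions | jq -e '.overall | keys' | grep e911bdebe5c8faa3800735d1568fcdca65db60df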
2026-03-10T05:57:30.519 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:30 vm02 bash[55303]: cluster 2026-03-10T05:57:28.874714+0000 mgr.y (mgr.24992) 304 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:30.519 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:30 vm02 bash[55303]: audit 2026-03-10T05:57:29.251133+0000 mgr.y (mgr.24992) 305 : audit [DBG] from='client.44550 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:57:30.519 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:30 vm02 bash[55303]: audit 2026-03-10T05:57:30.246512+0000 mon.a (mon.0) 737 : audit [DBG] from='client.? 192.168.123.102:0/568895299' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:30.519 INFO:teuthology.orchestra.run.vm02.stdout:wait for servicemap items w/ changing names to refresh
2026-03-10T05:57:30.555 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'sleep 60'
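The fixed 'sleep 60' above simply gives servicemap entries whose names change across the upgrade time to refresh. A sketch of a more targeted wait, under the assumption that 'ceph orch ps --format json' reports daemon_type and version fields per daemon and that only ceph daemon types need to match the target; the loop itself is illustrative, not part of the suite:

    # Illustrative alternative to a fixed sleep: poll until every ceph daemon
    # reports the target version in the orchestrator's inventory.
    target='19.2.3-678-ge911bdeb'
    until ceph orch ps --format json | jq -e --arg v "$target" \
        '[.[] | select(.daemon_type | IN("mon","mgr","osd","rgw")) | .version] | all(. == $v)'; do
        sleep 10
    done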
2026-03-10T05:57:30.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:30 vm05 bash[43541]: cluster 2026-03-10T05:57:28.874714+0000 mgr.y (mgr.24992) 304 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:30.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:30 vm05 bash[43541]: audit 2026-03-10T05:57:29.251133+0000 mgr.y (mgr.24992) 305 : audit [DBG] from='client.44550 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:57:30.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:30 vm05 bash[43541]: audit 2026-03-10T05:57:30.246512+0000 mon.a (mon.0) 737 : audit [DBG] from='client.? 192.168.123.102:0/568895299' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:57:31.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:31 vm05 bash[43541]: audit 2026-03-10T05:57:29.739081+0000 mgr.y (mgr.24992) 306 : audit [DBG] from='client.44553 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:57:31.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:31 vm02 bash[56371]: audit 2026-03-10T05:57:29.739081+0000 mgr.y (mgr.24992) 306 : audit [DBG] from='client.44553 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:57:31.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:31 vm02 bash[55303]: audit 2026-03-10T05:57:29.739081+0000 mgr.y (mgr.24992) 306 : audit [DBG] from='client.44553 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:57:32.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:32 vm05 bash[43541]: cluster 2026-03-10T05:57:30.875043+0000 mgr.y (mgr.24992) 307 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:32.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:32 vm02 bash[56371]: cluster 2026-03-10T05:57:30.875043+0000 mgr.y (mgr.24992) 307 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:32.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:32 vm02 bash[55303]: cluster 2026-03-10T05:57:30.875043+0000 mgr.y (mgr.24992) 307 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:33.335 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:57:32 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:57:32] "GET /metrics HTTP/1.1" 200 38254 "" "Prometheus/2.51.0"
2026-03-10T05:57:33.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:33 vm05 bash[43541]: audit 2026-03-10T05:57:32.064940+0000 mgr.y (mgr.24992) 308 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:57:33.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:33 vm02 bash[56371]: audit 2026-03-10T05:57:32.064940+0000 mgr.y (mgr.24992) 308 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:57:33.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:33 vm02 bash[55303]: audit 2026-03-10T05:57:32.064940+0000 mgr.y (mgr.24992) 308 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:57:34.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:34 vm05 bash[43541]: cluster 2026-03-10T05:57:32.875393+0000 mgr.y (mgr.24992) 309 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:34.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:34 vm02 bash[56371]: cluster 2026-03-10T05:57:32.875393+0000 mgr.y (mgr.24992) 309 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:34.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:34 vm02 bash[55303]: cluster 2026-03-10T05:57:32.875393+0000 mgr.y (mgr.24992) 309 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:36.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:36 vm05 bash[43541]: cluster 2026-03-10T05:57:34.875767+0000 mgr.y (mgr.24992) 310 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:36.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:36 vm02 bash[56371]: cluster 2026-03-10T05:57:34.875767+0000 mgr.y (mgr.24992) 310 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:36.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:36 vm02 bash[55303]: cluster 2026-03-10T05:57:34.875767+0000 mgr.y (mgr.24992) 310 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:38.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:38 vm05 bash[43541]: cluster 2026-03-10T05:57:36.876158+0000 mgr.y (mgr.24992) 311 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:38.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:38 vm02 bash[56371]: cluster 2026-03-10T05:57:36.876158+0000 mgr.y (mgr.24992) 311 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:38.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:38 vm02 bash[55303]: cluster 2026-03-10T05:57:36.876158+0000 mgr.y (mgr.24992) 311 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:40.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:40 vm05 bash[43541]: cluster 2026-03-10T05:57:38.876543+0000 mgr.y (mgr.24992) 312 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:40.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:40 vm02 bash[56371]: cluster 2026-03-10T05:57:38.876543+0000 mgr.y (mgr.24992) 312 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:40.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:40 vm02 bash[55303]: cluster 2026-03-10T05:57:38.876543+0000 mgr.y (mgr.24992) 312 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:41.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:41 vm05 bash[43541]: audit 2026-03-10T05:57:40.884915+0000 mon.a (mon.0) 738 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:57:41.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:41 vm02 bash[56371]: audit 2026-03-10T05:57:40.884915+0000 mon.a (mon.0) 738 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:57:41.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:41 vm02 bash[55303]: audit 2026-03-10T05:57:40.884915+0000 mon.a (mon.0) 738 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:57:42.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:42 vm05 bash[43541]: cluster 2026-03-10T05:57:40.877019+0000 mgr.y (mgr.24992) 313 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:42.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:42 vm02 bash[56371]: cluster 2026-03-10T05:57:40.877019+0000 mgr.y (mgr.24992) 313 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:42.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:42 vm02 bash[55303]: cluster 2026-03-10T05:57:40.877019+0000 mgr.y (mgr.24992) 313 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:43.335 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:57:42 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:57:42] "GET /metrics HTTP/1.1" 200 38253 "" "Prometheus/2.51.0"
2026-03-10T05:57:43.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:43 vm05 bash[43541]: audit 2026-03-10T05:57:42.070345+0000 mgr.y (mgr.24992) 314 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:57:43.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:43 vm02 bash[56371]: audit 2026-03-10T05:57:42.070345+0000 mgr.y (mgr.24992) 314 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:57:43.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:43 vm02 bash[55303]: audit 2026-03-10T05:57:42.070345+0000 mgr.y (mgr.24992) 314 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:57:44.748 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:44 vm05 bash[43541]: cluster 2026-03-10T05:57:42.877385+0000 mgr.y (mgr.24992) 315 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:44.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:44 vm02 bash[56371]: cluster 2026-03-10T05:57:42.877385+0000 mgr.y (mgr.24992) 315 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:44.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:44 vm02 bash[55303]: cluster 2026-03-10T05:57:42.877385+0000 mgr.y (mgr.24992) 315 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:46.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:46 vm02 bash[56371]: cluster 2026-03-10T05:57:44.877726+0000 mgr.y (mgr.24992) 316 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:46.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:46 vm02 bash[55303]: cluster 2026-03-10T05:57:44.877726+0000 mgr.y (mgr.24992) 316 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:46.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:46 vm05 bash[43541]: cluster 2026-03-10T05:57:44.877726+0000 mgr.y (mgr.24992) 316 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:48.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:48 vm02 bash[56371]: cluster 2026-03-10T05:57:46.878199+0000 mgr.y (mgr.24992) 317 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:48.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:48 vm02 bash[55303]: cluster 2026-03-10T05:57:46.878199+0000 mgr.y (mgr.24992) 317 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:48.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:48 vm05 bash[43541]: cluster 2026-03-10T05:57:46.878199+0000 mgr.y (mgr.24992) 317 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:50.835 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:50 vm02 bash[56371]: cluster 2026-03-10T05:57:48.878533+0000 mgr.y (mgr.24992) 318 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:50.835 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:50 vm02 bash[55303]: cluster 2026-03-10T05:57:48.878533+0000 mgr.y (mgr.24992) 318 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:50.998 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:50 vm05 bash[43541]: cluster 2026-03-10T05:57:48.878533+0000 mgr.y (mgr.24992) 318 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:52.080 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:51 vm02 bash[56371]: cluster 2026-03-10T05:57:50.878915+0000 mgr.y (mgr.24992) 319 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:52.081 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:51 vm02 bash[55303]: cluster 2026-03-10T05:57:50.878915+0000 mgr.y (mgr.24992) 319 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:52.248 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:51 vm05 bash[43541]: cluster 2026-03-10T05:57:50.878915+0000 mgr.y (mgr.24992) 319 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:53.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:52 vm02 bash[56371]: audit 2026-03-10T05:57:52.079274+0000 mgr.y (mgr.24992) 320 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:57:53.085 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:57:52 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:57:52] "GET /metrics HTTP/1.1" 200 38253 "" "Prometheus/2.51.0"
2026-03-10T05:57:53.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:52 vm02 bash[55303]: audit 2026-03-10T05:57:52.079274+0000 mgr.y (mgr.24992) 320 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:57:53.247 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:52 vm05 bash[43541]: audit 2026-03-10T05:57:52.079274+0000 mgr.y (mgr.24992) 320 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:57:54.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:53 vm02 bash[56371]: cluster 2026-03-10T05:57:52.879275+0000 mgr.y (mgr.24992) 321 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:54.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:53 vm02 bash[55303]: cluster 2026-03-10T05:57:52.879275+0000 mgr.y (mgr.24992) 321 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:54.248 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:53 vm05 bash[43541]: cluster 2026-03-10T05:57:52.879275+0000 mgr.y (mgr.24992) 321 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:57:56.248 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:55 vm05 bash[43541]: cluster 2026-03-10T05:57:54.879606+0000 mgr.y (mgr.24992) 322 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:56.248 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:55 vm05 bash[43541]: audit 2026-03-10T05:57:55.884874+0000 mon.a (mon.0) 739 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:57:56.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:55 vm02 bash[56371]: cluster 2026-03-10T05:57:54.879606+0000 mgr.y (mgr.24992) 322 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:57:56.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:55 vm02 bash[56371]: cluster
2026-03-10T05:57:54.879606+0000 mgr.y (mgr.24992) 322 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:57:56.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:55 vm02 bash[56371]: audit 2026-03-10T05:57:55.884874+0000 mon.a (mon.0) 739 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:57:56.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:55 vm02 bash[56371]: audit 2026-03-10T05:57:55.884874+0000 mon.a (mon.0) 739 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:57:56.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:55 vm02 bash[55303]: cluster 2026-03-10T05:57:54.879606+0000 mgr.y (mgr.24992) 322 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:57:56.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:55 vm02 bash[55303]: cluster 2026-03-10T05:57:54.879606+0000 mgr.y (mgr.24992) 322 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:57:56.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:55 vm02 bash[55303]: audit 2026-03-10T05:57:55.884874+0000 mon.a (mon.0) 739 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:57:56.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:55 vm02 bash[55303]: audit 2026-03-10T05:57:55.884874+0000 mon.a (mon.0) 739 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T05:57:58.248 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:57 vm05 bash[43541]: cluster 2026-03-10T05:57:56.879961+0000 mgr.y (mgr.24992) 323 : cluster [DBG] pgmap v193: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:57:58.248 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:57 vm05 bash[43541]: cluster 2026-03-10T05:57:56.879961+0000 mgr.y (mgr.24992) 323 : cluster [DBG] pgmap v193: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:57:58.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:57 vm02 bash[56371]: cluster 2026-03-10T05:57:56.879961+0000 mgr.y (mgr.24992) 323 : cluster [DBG] pgmap v193: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:57:58.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:57 vm02 bash[56371]: cluster 2026-03-10T05:57:56.879961+0000 mgr.y (mgr.24992) 323 : cluster [DBG] pgmap v193: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:57:58.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:57 vm02 bash[55303]: cluster 2026-03-10T05:57:56.879961+0000 mgr.y (mgr.24992) 323 : cluster [DBG] pgmap v193: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:57:58.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:57 vm02 bash[55303]: cluster 2026-03-10T05:57:56.879961+0000 mgr.y (mgr.24992) 323 : cluster [DBG] pgmap v193: 
161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T05:58:00.247 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:59 vm05 bash[43541]: cluster 2026-03-10T05:57:58.880340+0000 mgr.y (mgr.24992) 324 : cluster [DBG] pgmap v194: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:58:00.248 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:57:59 vm05 bash[43541]: cluster 2026-03-10T05:57:58.880340+0000 mgr.y (mgr.24992) 324 : cluster [DBG] pgmap v194: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:58:00.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:59 vm02 bash[56371]: cluster 2026-03-10T05:57:58.880340+0000 mgr.y (mgr.24992) 324 : cluster [DBG] pgmap v194: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:58:00.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:57:59 vm02 bash[56371]: cluster 2026-03-10T05:57:58.880340+0000 mgr.y (mgr.24992) 324 : cluster [DBG] pgmap v194: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:58:00.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:59 vm02 bash[55303]: cluster 2026-03-10T05:57:58.880340+0000 mgr.y (mgr.24992) 324 : cluster [DBG] pgmap v194: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:58:00.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:57:59 vm02 bash[55303]: cluster 2026-03-10T05:57:58.880340+0000 mgr.y (mgr.24992) 324 : cluster [DBG] pgmap v194: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:58:02.247 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:01 vm05 bash[43541]: cluster 2026-03-10T05:58:00.880681+0000 mgr.y (mgr.24992) 325 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:58:02.248 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:01 vm05 bash[43541]: cluster 2026-03-10T05:58:00.880681+0000 mgr.y (mgr.24992) 325 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:58:02.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:01 vm02 bash[56371]: cluster 2026-03-10T05:58:00.880681+0000 mgr.y (mgr.24992) 325 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:58:02.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:01 vm02 bash[56371]: cluster 2026-03-10T05:58:00.880681+0000 mgr.y (mgr.24992) 325 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:58:02.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:01 vm02 bash[55303]: cluster 2026-03-10T05:58:00.880681+0000 mgr.y (mgr.24992) 325 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T05:58:02.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:01 vm02 bash[55303]: cluster 2026-03-10T05:58:00.880681+0000 mgr.y (mgr.24992) 325 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
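The pgmap records above are the mgr's steady-state heartbeat: a new pgmap epoch every two seconds, always 161 PGs and all of them active+clean. A step that needs to wait for that state can poll the structured status output instead of scraping journald relays. A minimal sketch, not part of this job, assuming 'ceph' and 'jq' are on PATH and using the pgmap section of 'ceph status --format json':

    # Poll until every PG reports active+clean (illustrative helper, not from this run).
    until ceph status --format json | jq -e '
            .pgmap.num_pgs as $total
            | ([.pgmap.pgs_by_state[]
                | select(.state_name == "active+clean").count] | add // 0) == $total
          ' >/dev/null; do
        sleep 5
    done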
2026-03-10T05:58:03.248 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:02 vm05 bash[43541]: audit 2026-03-10T05:58:02.086294+0000 mgr.y (mgr.24992) 326 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:58:03.335 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:58:02 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:58:02] "GET /metrics HTTP/1.1" 200 38254 "" "Prometheus/2.51.0"
2026-03-10T05:58:04.247 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:03 vm05 bash[43541]: cluster 2026-03-10T05:58:02.881123+0000 mgr.y (mgr.24992) 327 : cluster [DBG] pgmap v196: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:58:06.247 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:05 vm05 bash[43541]: cluster 2026-03-10T05:58:04.881438+0000 mgr.y (mgr.24992) 328 : cluster [DBG] pgmap v197: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:58:08.247 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:07 vm05 bash[43541]: cluster 2026-03-10T05:58:06.881850+0000 mgr.y (mgr.24992) 329 : cluster [DBG] pgmap v198: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:58:08.248 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 05:58:08 vm05 bash[59013]: logger=infra.usagestats t=2026-03-10T05:58:08.222219336Z level=info msg="Usage stats are ready to report"
2026-03-10T05:58:10.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:09 vm02 bash[56371]: cluster 2026-03-10T05:58:08.882225+0000 mgr.y (mgr.24992) 330 : cluster [DBG] pgmap v199: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:58:11.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:11 vm02 bash[56371]: audit 2026-03-10T05:58:10.888578+0000 mon.a (mon.0) 740 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:58:12.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:12 vm02 bash[56371]: cluster 2026-03-10T05:58:10.882560+0000 mgr.y (mgr.24992) 331 : cluster [DBG] pgmap v200: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:58:13.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:13 vm02 bash[56371]: audit 2026-03-10T05:58:12.096156+0000 mgr.y (mgr.24992) 332 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:58:13.335 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:58:12 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:58:12] "GET /metrics HTTP/1.1" 200 38256 "" "Prometheus/2.51.0"
2026-03-10T05:58:14.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:14 vm02 bash[56371]: cluster 2026-03-10T05:58:12.882950+0000 mgr.y (mgr.24992) 333 : cluster [DBG] pgmap v201: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:58:15.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:15 vm02 bash[56371]: audit 2026-03-10T05:58:14.718624+0000 mon.a (mon.0) 741 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T05:58:15.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:15 vm02 bash[56371]: audit 2026-03-10T05:58:14.719142+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T05:58:15.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:15 vm02 bash[56371]: audit 2026-03-10T05:58:14.723965+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y'
2026-03-10T05:58:16.997 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:16 vm05 bash[43541]: cluster 2026-03-10T05:58:14.883291+0000 mgr.y (mgr.24992) 334 : cluster [DBG] pgmap v202: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:58:18.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:17 vm02 bash[56371]: cluster 2026-03-10T05:58:16.883742+0000 mgr.y (mgr.24992) 335 : cluster [DBG] pgmap v203: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:58:20.247 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:19 vm05 bash[43541]: cluster 2026-03-10T05:58:18.884088+0000 mgr.y (mgr.24992) 336 : cluster [DBG] pgmap v204: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
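Audit entries 741-743 just above show the cephadm mgr module refreshing the minimal client config and the admin keyring, which it keeps in sync under /etc/ceph on managed hosts. The same two commands can be reproduced by hand; a sketch, with illustrative output paths:

    # Regenerate the minimal conf: a [global] section with fsid and mon_host
    # (audit entry 741 above).
    ceph config generate-minimal-conf > /tmp/ceph.conf.minimal
    # Fetch the admin keyring, as in audit entry 742.
    ceph auth get client.admin -o /tmp/ceph.client.admin.keyring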
2026-03-10T05:58:22.247 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:21 vm05 bash[43541]: cluster 2026-03-10T05:58:20.884448+0000 mgr.y (mgr.24992) 337 : cluster [DBG] pgmap v205: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:58:23.247 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:22 vm05 bash[43541]: audit 2026-03-10T05:58:22.102277+0000 mgr.y (mgr.24992) 338 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:58:23.335 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:58:22 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:58:22] "GET /metrics HTTP/1.1" 200 38256 "" "Prometheus/2.51.0"
2026-03-10T05:58:24.247 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:23 vm05 bash[43541]: cluster 2026-03-10T05:58:22.884857+0000 mgr.y (mgr.24992) 339 : cluster [DBG] pgmap v206: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:58:26.247 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:25 vm05 bash[43541]: cluster 2026-03-10T05:58:24.885172+0000 mgr.y (mgr.24992) 340 : cluster [DBG] pgmap v207: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:58:26.247 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:25 vm05 bash[43541]: audit 2026-03-10T05:58:25.887828+0000 mon.a (mon.0) 744 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:58:28.247 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:27 vm05 bash[43541]: cluster 2026-03-10T05:58:26.885573+0000 mgr.y (mgr.24992) 341 : cluster [DBG] pgmap v208: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:58:30.247 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:29 vm05 bash[43541]: cluster 2026-03-10T05:58:28.885953+0000 mgr.y (mgr.24992) 342 : cluster [DBG] pgmap v209: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:58:30.833 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps'
2026-03-10T05:58:31.295 INFO:teuthology.orchestra.run.vm02.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T05:58:31.295 INFO:teuthology.orchestra.run.vm02.stdout:alertmanager.a vm02 *:9093,9094 running (6m) 83s ago 11m 13.2M - 0.25.0 c8568f914cd2 7a7c5c2cddb6
2026-03-10T05:58:31.295 INFO:teuthology.orchestra.run.vm02.stdout:grafana.a vm05 *:3000 running (88s) 83s ago 11m 58.0M - 10.4.0 c8b91775d855 5f00ef7c3fac
2026-03-10T05:58:31.295 INFO:teuthology.orchestra.run.vm02.stdout:iscsi.foo.vm02.mxbwmh vm02 running (109s) 83s ago 10m 48.3M - 3.9 654f31e6858e f1b577537dcd
2026-03-10T05:58:31.295 INFO:teuthology.orchestra.run.vm02.stdout:mgr.x vm05 *:8443,9283,8765 running (5m) 83s ago 13m 465M - 19.2.3-678-ge911bdeb 654f31e6858e 7579626ada90
2026-03-10T05:58:31.295 INFO:teuthology.orchestra.run.vm02.stdout:mgr.y vm02 *:8443,9283,8765 running (6m) 83s ago 14m 544M - 19.2.3-678-ge911bdeb 654f31e6858e ef46d0f7b15e
2026-03-10T05:58:31.296 INFO:teuthology.orchestra.run.vm02.stdout:mon.a vm02 running (5m) 83s ago 14m 60.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e df3a0a290a95
2026-03-10T05:58:31.296 INFO:teuthology.orchestra.run.vm02.stdout:mon.b vm05 running (5m) 83s ago 14m 51.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1da04b90d16b
2026-03-10T05:58:31.296 INFO:teuthology.orchestra.run.vm02.stdout:mon.c vm02 running (5m) 83s ago 14m 57.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 7f2cdf1b7aa6
2026-03-10T05:58:31.296 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.a vm02 *:9100 running (6m) 83s ago 11m 7560k - 1.7.0 72c9c2088986 90288450bd1f
2026-03-10T05:58:31.296 INFO:teuthology.orchestra.run.vm02.stdout:node-exporter.b vm05 *:9100 running (6m) 83s ago 11m 7591k - 1.7.0 72c9c2088986 4e859143cb0e
2026-03-10T05:58:31.296 INFO:teuthology.orchestra.run.vm02.stdout:osd.0 vm02 running (4m) 83s ago 13m 75.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 640360275f83
2026-03-10T05:58:31.296 INFO:teuthology.orchestra.run.vm02.stdout:osd.1 vm02 running (3m) 83s ago 13m 57.1M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 4de5c460789a
2026-03-10T05:58:31.296 INFO:teuthology.orchestra.run.vm02.stdout:osd.2 vm02 running (4m) 83s ago 13m 51.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 51dac2f581d9
2026-03-10T05:58:31.296 INFO:teuthology.orchestra.run.vm02.stdout:osd.3 vm02 running (4m) 83s ago 12m 81.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 0eca961791f4
2026-03-10T05:58:31.296 INFO:teuthology.orchestra.run.vm02.stdout:osd.4 vm05 running (3m) 83s ago 12m 57.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 2c1b499265f4
2026-03-10T05:58:31.296 INFO:teuthology.orchestra.run.vm02.stdout:osd.5 vm05 running (3m) 83s ago 12m 75.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7ec1a1246098
2026-03-10T05:58:31.296 INFO:teuthology.orchestra.run.vm02.stdout:osd.6 vm05 running (2m) 83s ago 12m 73.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e bd151ab03026
2026-03-10T05:58:31.296 INFO:teuthology.orchestra.run.vm02.stdout:osd.7 vm05 running (2m) 83s ago 11m 73.1M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 83fe4a7f26f5
2026-03-10T05:58:31.296 INFO:teuthology.orchestra.run.vm02.stdout:prometheus.a vm05 *:9095 running (5m) 83s ago 11m 39.3M - 2.51.0 1d3b7f56885b 3328811f8f28
2026-03-10T05:58:31.296 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm02.pbogjd vm02 *:8000 running (2m) 83s ago 10m 92.6M - 19.2.3-678-ge911bdeb 654f31e6858e 4e1a47dc4ede
2026-03-10T05:58:31.296 INFO:teuthology.orchestra.run.vm02.stdout:rgw.foo.vm05.hvmsxl vm05 *:8000 running (2m) 83s ago 10m 92.6M - 19.2.3-678-ge911bdeb 654f31e6858e 51931a978021
2026-03-10T05:58:31.296 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm02.pglcfm vm02 *:80 running (2m) 83s ago 10m 92.6M - 19.2.3-678-ge911bdeb 654f31e6858e a59d6d93b54c
2026-03-10T05:58:31.296 INFO:teuthology.orchestra.run.vm02.stdout:rgw.smpl.vm05.hqqmap vm05 *:80 running (2m) 83s ago 10m 92.4M - 19.2.3-678-ge911bdeb 654f31e6858e 62b012e7d3ec
2026-03-10T05:58:31.343 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions'
2026-03-10T05:58:31.793 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:58:31.793 INFO:teuthology.orchestra.run.vm02.stdout:    "mon": {
2026-03-10T05:58:31.793 INFO:teuthology.orchestra.run.vm02.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-10T05:58:31.793 INFO:teuthology.orchestra.run.vm02.stdout:    },
2026-03-10T05:58:31.793 INFO:teuthology.orchestra.run.vm02.stdout:    "mgr": {
2026-03-10T05:58:31.793 INFO:teuthology.orchestra.run.vm02.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T05:58:31.793 INFO:teuthology.orchestra.run.vm02.stdout:    },
2026-03-10T05:58:31.793 INFO:teuthology.orchestra.run.vm02.stdout:    "osd": {
2026-03-10T05:58:31.793 INFO:teuthology.orchestra.run.vm02.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 8
2026-03-10T05:58:31.793 INFO:teuthology.orchestra.run.vm02.stdout:    },
2026-03-10T05:58:31.793 INFO:teuthology.orchestra.run.vm02.stdout:    "rgw": {
2026-03-10T05:58:31.793 INFO:teuthology.orchestra.run.vm02.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4
2026-03-10T05:58:31.793 INFO:teuthology.orchestra.run.vm02.stdout:    },
2026-03-10T05:58:31.793 INFO:teuthology.orchestra.run.vm02.stdout:    "overall": {
2026-03-10T05:58:31.793 INFO:teuthology.orchestra.run.vm02.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 17
2026-03-10T05:58:31.793 INFO:teuthology.orchestra.run.vm02.stdout:    }
2026-03-10T05:58:31.793 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:58:31.847 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade status'
2026-03-10T05:58:32.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:31 vm02 bash[56371]: cluster 2026-03-10T05:58:30.886360+0000 mgr.y (mgr.24992) 343 : cluster [DBG] pgmap v210: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:58:32.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:31 vm02 bash[56371]: audit 2026-03-10T05:58:31.290239+0000 mgr.y (mgr.24992) 344 : audit [DBG] from='client.54470 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:58:32.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:31 vm02 bash[56371]: audit 2026-03-10T05:58:31.791911+0000 mon.c (mon.1) 20 : audit [DBG] from='client.? 192.168.123.102:0/2183467475' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
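Taken together, 'ceph orch ps' and 'ceph versions' above are the post-upgrade evidence: all 23 cephadm-managed daemons are running (the monitoring stack and the iscsi daemon carry their own version strings and are not counted by 'ceph versions'), and the 17 Ceph daemons (3 mon, 2 mgr, 8 osd, 4 rgw) all report build 19.2.3-678-ge911bdeb, so ".overall" holds exactly one key. The convergence probe the job issues a few lines below can be reproduced directly; a sketch assuming jq is available:

    ceph versions | jq -e '.overall | length == 1'   # exit 0 only when a single build remains
    ceph versions | jq -r '.overall | keys[]'        # print that one remaining version string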
2026-03-10T05:58:32.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:31 vm02 bash[56371]: audit 2026-03-10T05:58:31.791911+0000 mon.c (mon.1) 20 : audit [DBG] from='client.? 192.168.123.102:0/2183467475' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:58:32.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:31 vm02 bash[55303]: cluster 2026-03-10T05:58:30.886360+0000 mgr.y (mgr.24992) 343 : cluster [DBG] pgmap v210: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:58:32.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:31 vm02 bash[55303]: audit 2026-03-10T05:58:31.290239+0000 mgr.y (mgr.24992) 344 : audit [DBG] from='client.54470 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:58:32.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:31 vm02 bash[55303]: audit 2026-03-10T05:58:31.791911+0000 mon.c (mon.1) 20 : audit [DBG] from='client.? 192.168.123.102:0/2183467475' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:58:32.247 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:31 vm05 bash[43541]: cluster 2026-03-10T05:58:30.886360+0000 mgr.y (mgr.24992) 343 : cluster [DBG] pgmap v210: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:58:32.248 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:31 vm05 bash[43541]: audit 2026-03-10T05:58:31.290239+0000 mgr.y (mgr.24992) 344 : audit [DBG] from='client.54470 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:58:32.248 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:31 vm05 bash[43541]: audit 2026-03-10T05:58:31.791911+0000 mon.c (mon.1) 20 : audit [DBG] from='client.? 192.168.123.102:0/2183467475' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:58:32.298 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:58:32.298 INFO:teuthology.orchestra.run.vm02.stdout:    "target_image": null,
2026-03-10T05:58:32.298 INFO:teuthology.orchestra.run.vm02.stdout:    "in_progress": false,
2026-03-10T05:58:32.298 INFO:teuthology.orchestra.run.vm02.stdout:    "which": "",
2026-03-10T05:58:32.298 INFO:teuthology.orchestra.run.vm02.stdout:    "services_complete": [],
2026-03-10T05:58:32.298 INFO:teuthology.orchestra.run.vm02.stdout:    "progress": null,
2026-03-10T05:58:32.298 INFO:teuthology.orchestra.run.vm02.stdout:    "message": "",
2026-03-10T05:58:32.298 INFO:teuthology.orchestra.run.vm02.stdout:    "is_paused": false
2026-03-10T05:58:32.298 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:58:32.345 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail'
2026-03-10T05:58:32.811 INFO:teuthology.orchestra.run.vm02.stdout:HEALTH_OK
2026-03-10T05:58:32.868 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.overall | length == 1'"'"''
2026-03-10T05:58:33.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:32 vm02 bash[56371]: audit 2026-03-10T05:58:32.112646+0000 mgr.y (mgr.24992) 345 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
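The upgrade-status JSON above shows the terminal state of a completed upgrade: "in_progress" is false and "message" is empty. A hedged one-liner that asserts exactly that state, assuming only the field names visible in the output (this check is illustrative, not part of the task itself):

    # jq -e exits 0 only when the upgrade is idle and ended without error.
    ceph orch upgrade status | jq -e '.in_progress == false and .message == ""'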
2026-03-10T05:58:33.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:32 vm02 bash[56371]: audit 2026-03-10T05:58:32.296617+0000 mgr.y (mgr.24992) 346 : audit [DBG] from='client.54482 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:58:33.085 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:32 vm02 bash[56371]: audit 2026-03-10T05:58:32.809964+0000 mon.c (mon.1) 21 : audit [DBG] from='client.? 192.168.123.102:0/2310588032' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T05:58:33.085 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:58:32 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:58:32] "GET /metrics HTTP/1.1" 200 38256 "" "Prometheus/2.51.0"
2026-03-10T05:58:33.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:32 vm02 bash[55303]: audit 2026-03-10T05:58:32.112646+0000 mgr.y (mgr.24992) 345 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:58:33.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:32 vm02 bash[55303]: audit 2026-03-10T05:58:32.296617+0000 mgr.y (mgr.24992) 346 : audit [DBG] from='client.54482 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:58:33.085 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:32 vm02 bash[55303]: audit 2026-03-10T05:58:32.809964+0000 mon.c (mon.1) 21 : audit [DBG] from='client.? 192.168.123.102:0/2310588032' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T05:58:33.247 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:32 vm05 bash[43541]: audit 2026-03-10T05:58:32.112646+0000 mgr.y (mgr.24992) 345 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:58:33.247 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:32 vm05 bash[43541]: audit 2026-03-10T05:58:32.296617+0000 mgr.y (mgr.24992) 346 : audit [DBG] from='client.54482 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:58:33.247 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:32 vm05 bash[43541]: audit 2026-03-10T05:58:32.809964+0000 mon.c (mon.1) 21 : audit [DBG] from='client.? 192.168.123.102:0/2310588032' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T05:58:33.354 INFO:teuthology.orchestra.run.vm02.stdout:true
2026-03-10T05:58:33.396 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.overall | keys'"'"' | grep $sha1'
2026-03-10T05:58:33.874 INFO:teuthology.orchestra.run.vm02.stdout:  "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)"
2026-03-10T05:58:33.916 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ls | grep '"'"'^osd '"'"''
2026-03-10T05:58:34.149 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:33 vm02 bash[56371]: cluster 2026-03-10T05:58:32.886767+0000 mgr.y (mgr.24992) 347 : cluster [DBG] pgmap v211: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:58:34.149 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:33 vm02 bash[56371]: audit 2026-03-10T05:58:33.349016+0000 mon.b (mon.2) 16 : audit [DBG] from='client.? 192.168.123.102:0/622982935' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:58:34.149 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:33 vm02 bash[56371]: audit 2026-03-10T05:58:33.868311+0000 mon.b (mon.2) 17 : audit [DBG] from='client.? 192.168.123.102:0/273969834' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:58:34.149 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:33 vm02 bash[55303]: cluster 2026-03-10T05:58:32.886767+0000 mgr.y (mgr.24992) 347 : cluster [DBG] pgmap v211: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:58:34.149 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:33 vm02 bash[55303]: audit 2026-03-10T05:58:33.349016+0000 mon.b (mon.2) 16 : audit [DBG] from='client.? 192.168.123.102:0/622982935' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:58:34.149 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:33 vm02 bash[55303]: audit 2026-03-10T05:58:33.868311+0000 mon.b (mon.2) 17 : audit [DBG] from='client.? 192.168.123.102:0/273969834' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:58:34.247 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:33 vm05 bash[43541]: cluster 2026-03-10T05:58:32.886767+0000 mgr.y (mgr.24992) 347 : cluster [DBG] pgmap v211: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:58:34.247 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:33 vm05 bash[43541]: audit 2026-03-10T05:58:33.349016+0000 mon.b (mon.2) 16 : audit [DBG] from='client.? 192.168.123.102:0/622982935' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:58:34.247 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:33 vm05 bash[43541]: audit 2026-03-10T05:58:33.868311+0000 mon.b (mon.2) 17 : audit [DBG] from='client.? 192.168.123.102:0/273969834' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T05:58:34.364 INFO:teuthology.orchestra.run.vm02.stdout:osd 8 86s ago -
2026-03-10T05:58:34.424 INFO:teuthology.run_tasks:Running task cephadm.shell...
2026-03-10T05:58:34.426 INFO:tasks.cephadm:Running commands on role mon.a host ubuntu@vm02.local
2026-03-10T05:58:34.427 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- bash -c 'ceph orch upgrade ls'
2026-03-10T05:58:35.247 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:34 vm05 bash[43541]: audit 2026-03-10T05:58:34.351496+0000 mgr.y (mgr.24992) 348 : audit [DBG] from='client.54500 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:58:35.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:34 vm02 bash[56371]: audit 2026-03-10T05:58:34.351496+0000 mgr.y (mgr.24992) 348 : audit [DBG] from='client.54500 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:58:35.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:34 vm02 bash[55303]: audit 2026-03-10T05:58:34.351496+0000 mgr.y (mgr.24992) 348 : audit [DBG] from='client.54500 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:58:36.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:36 vm02 bash[56371]: audit 2026-03-10T05:58:34.859422+0000 mgr.y (mgr.24992) 349 : audit [DBG] from='client.44592 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:58:36.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:36 vm02 bash[56371]: cluster 2026-03-10T05:58:34.887083+0000 mgr.y (mgr.24992) 350 : cluster [DBG] pgmap v212: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:58:36.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:36 vm02 bash[55303]: audit 2026-03-10T05:58:34.859422+0000 mgr.y (mgr.24992) 349 : audit [DBG] from='client.44592 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:58:36.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:36 vm02 bash[55303]: cluster 2026-03-10T05:58:34.887083+0000 mgr.y (mgr.24992) 350 : cluster [DBG] pgmap v212: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:58:36.497 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:36 vm05 bash[43541]: audit 2026-03-10T05:58:34.859422+0000 mgr.y (mgr.24992) 349 : audit [DBG] from='client.44592 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:58:36.497 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:36 vm05 bash[43541]: cluster 2026-03-10T05:58:34.887083+0000 mgr.y (mgr.24992) 350 : cluster [DBG] pgmap v212: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:58:38.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:38 vm02 bash[56371]: cluster 2026-03-10T05:58:36.887487+0000 mgr.y (mgr.24992) 351 : cluster [DBG] pgmap v213: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:58:38.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:38 vm02 bash[55303]: cluster 2026-03-10T05:58:36.887487+0000 mgr.y (mgr.24992) 351 : cluster [DBG] pgmap v213: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:58:38.497 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:38 vm05 bash[43541]: cluster 2026-03-10T05:58:36.887487+0000 mgr.y (mgr.24992) 351 : cluster [DBG] pgmap v213: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:58:40.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:40 vm02 bash[56371]: cluster 2026-03-10T05:58:38.887904+0000 mgr.y (mgr.24992) 352 : cluster [DBG] pgmap v214: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:58:40.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:40 vm02 bash[55303]: cluster 2026-03-10T05:58:38.887904+0000 mgr.y (mgr.24992) 352 : cluster [DBG] pgmap v214: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:58:40.497 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:40 vm05 bash[43541]: cluster 2026-03-10T05:58:38.887904+0000 mgr.y (mgr.24992) 352 : cluster [DBG] pgmap v214: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:58:41.325 INFO:teuthology.orchestra.run.vm02.stdout:{
2026-03-10T05:58:41.326 INFO:teuthology.orchestra.run.vm02.stdout:    "image": "quay.io/ceph/ceph",
2026-03-10T05:58:41.326 INFO:teuthology.orchestra.run.vm02.stdout:    "registry": "quay.io",
2026-03-10T05:58:41.326 INFO:teuthology.orchestra.run.vm02.stdout:    "bare_image": "ceph/ceph",
2026-03-10T05:58:41.326 INFO:teuthology.orchestra.run.vm02.stdout:    "versions": [
2026-03-10T05:58:41.326 INFO:teuthology.orchestra.run.vm02.stdout:        "20.2.0",
2026-03-10T05:58:41.326 INFO:teuthology.orchestra.run.vm02.stdout:        "20.1.1",
2026-03-10T05:58:41.326 INFO:teuthology.orchestra.run.vm02.stdout:        "20.1.0",
2026-03-10T05:58:41.326 INFO:teuthology.orchestra.run.vm02.stdout:        "19.2.3",
2026-03-10T05:58:41.326 INFO:teuthology.orchestra.run.vm02.stdout:        "19.2.2",
2026-03-10T05:58:41.326 INFO:teuthology.orchestra.run.vm02.stdout:        "19.2.1",
2026-03-10T05:58:41.326 INFO:teuthology.orchestra.run.vm02.stdout:        "19.2.0"
2026-03-10T05:58:41.326 INFO:teuthology.orchestra.run.vm02.stdout:    ]
2026-03-10T05:58:41.326 INFO:teuthology.orchestra.run.vm02.stdout:}
2026-03-10T05:58:41.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:41 vm02 bash[56371]: audit 2026-03-10T05:58:40.887970+0000 mon.a (mon.0) 745 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:58:41.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:41 vm02 bash[55303]: audit 2026-03-10T05:58:40.887970+0000 mon.a (mon.0) 745 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:58:41.382 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- bash -c 'ceph orch upgrade ls --image quay.io/ceph/ceph --show-all-versions | grep 16.2.0'
2026-03-10T05:58:41.497 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:41 vm05 bash[43541]: audit 2026-03-10T05:58:40.887970+0000 mon.a (mon.0) 745 : audit [DBG] from='mgr.24992 192.168.123.102:0/3915358871' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T05:58:42.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:42 vm02 bash[56371]: cluster 2026-03-10T05:58:40.888299+0000 mgr.y (mgr.24992) 353 : cluster [DBG] pgmap v215: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:58:42.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:42 vm02 bash[55303]: cluster 2026-03-10T05:58:40.888299+0000 mgr.y (mgr.24992) 353 : cluster [DBG] pgmap v215: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:58:42.497 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:42 vm05 bash[43541]: cluster 2026-03-10T05:58:40.888299+0000 mgr.y (mgr.24992) 353 : cluster [DBG] pgmap v215: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:58:43.322 INFO:teuthology.orchestra.run.vm02.stdout:        "16.2.0",
2026-03-10T05:58:43.323 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:43 vm02 bash[56371]: audit 2026-03-10T05:58:41.838592+0000 mgr.y (mgr.24992) 354 : audit [DBG] from='client.34606 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "show_all_versions": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:58:43.323 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:43 vm02 bash[56371]: audit 2026-03-10T05:58:42.118325+0000 mgr.y (mgr.24992) 355 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:58:43.323 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:58:42 vm02 bash[52264]: ::ffff:192.168.123.105 - - [10/Mar/2026:05:58:42] "GET /metrics HTTP/1.1" 200 38251 "" "Prometheus/2.51.0"
2026-03-10T05:58:43.323 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:43 vm02 bash[55303]: audit 2026-03-10T05:58:41.838592+0000 mgr.y (mgr.24992) 354 : audit [DBG] from='client.34606 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "show_all_versions": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:58:43.323 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:43 vm02 bash[55303]: audit 2026-03-10T05:58:42.118325+0000 mgr.y (mgr.24992) 355 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:58:43.497 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:43 vm05 bash[43541]: audit 2026-03-10T05:58:41.838592+0000 mgr.y (mgr.24992) 354 : audit [DBG] from='client.34606 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "show_all_versions": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:58:43.497 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:43 vm05 bash[43541]: audit 2026-03-10T05:58:42.118325+0000 mgr.y (mgr.24992) 355 : audit [DBG] from='client.34459 -' entity='client.iscsi.foo.vm02.mxbwmh' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T05:58:43.894 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- bash -c 'ceph orch upgrade ls --image quay.io/ceph/ceph --tags | grep v16.2.2'
2026-03-10T05:58:44.145 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:44 vm02 bash[56371]: cluster 2026-03-10T05:58:42.888685+0000 mgr.y (mgr.24992) 356 : cluster [DBG] pgmap v216: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:58:44.146 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:44 vm02 bash[55303]: cluster 2026-03-10T05:58:42.888685+0000 mgr.y (mgr.24992) 356 : cluster [DBG] pgmap v216: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:58:44.497 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:44 vm05 bash[43541]: cluster 2026-03-10T05:58:42.888685+0000 mgr.y (mgr.24992) 356 : cluster [DBG] pgmap v216: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T05:58:45.335 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:45 vm02 bash[56371]: audit 2026-03-10T05:58:44.343746+0000 mgr.y (mgr.24992) 357 : audit [DBG] from='client.54512 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "tags": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:58:45.335 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:45 vm02 bash[55303]: audit 2026-03-10T05:58:44.343746+0000 mgr.y (mgr.24992) 357 : audit [DBG] from='client.54512 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "tags": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:58:45.497 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:45 vm05 bash[43541]: audit 2026-03-10T05:58:44.343746+0000 mgr.y (mgr.24992) 357 : audit [DBG] from='client.54512 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "tags": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T05:58:45.723 INFO:teuthology.orchestra.run.vm02.stdout:        "v16.2.2",
2026-03-10T05:58:45.723 INFO:teuthology.orchestra.run.vm02.stdout:        "v16.2.2-20210505",
2026-03-10T05:58:45.771 DEBUG:teuthology.run_tasks:Unwinding manager cephadm
2026-03-10T05:58:45.773 INFO:tasks.cephadm:Teardown begin
2026-03-10T05:58:45.773 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T05:58:45.781 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T05:58:45.798 INFO:tasks.cephadm:Disabling cephadm mgr module
2026-03-10T05:58:45.798 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 107483ae-1c44-11f1-b530-c1172cd6122a -- ceph mgr module disable cephadm
2026-03-10T05:58:46.118 INFO:teuthology.orchestra.run.vm02.stderr:Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
2026-03-10T05:58:46.178 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T05:58:46.179 INFO:tasks.cephadm:Cleaning up testdir ceph.* files...
2026-03-10T05:58:46.179 DEBUG:teuthology.orchestra.run.vm02:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-10T05:58:46.181 DEBUG:teuthology.orchestra.run.vm05:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-10T05:58:46.184 INFO:tasks.cephadm:Stopping all daemons...
2026-03-10T05:58:46.184 INFO:tasks.cephadm.mon.a:Stopping mon.a...
2026-03-10T05:58:46.184 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mon.a
2026-03-10T05:58:46.231 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:46 vm02 bash[56371]: cluster 2026-03-10T05:58:44.888998+0000 mgr.y (mgr.24992) 358 : cluster [DBG] pgmap v217: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:58:46.231 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:46 vm02 bash[55303]: cluster 2026-03-10T05:58:44.888998+0000 mgr.y (mgr.24992) 358 : cluster [DBG] pgmap v217: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:58:46.488 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:46 vm02 systemd[1]: Stopping Ceph mon.a for 107483ae-1c44-11f1-b530-c1172cd6122a...
2026-03-10T05:58:46.488 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:46 vm02 bash[56371]: debug 2026-03-10T05:58:46.263+0000 7fc41e0ad640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-10T05:58:46.488 INFO:journalctl@ceph.mon.a.vm02.stdout:Mar 10 05:58:46 vm02 bash[56371]: debug 2026-03-10T05:58:46.263+0000 7fc41e0ad640 -1 mon.a@0(leader) e4 *** Got Signal Terminated ***
2026-03-10T05:58:46.488 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:58:46 vm02 bash[52264]: [10/Mar/2026:05:58:46] ENGINE Bus STOPPING
2026-03-10T05:58:46.488 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:58:46 vm02 bash[52264]: [10/Mar/2026:05:58:46] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-10T05:58:46.488 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:58:46 vm02 bash[52264]: [10/Mar/2026:05:58:46] ENGINE Bus STOPPED
2026-03-10T05:58:46.488 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:58:46 vm02 bash[52264]: [10/Mar/2026:05:58:46] ENGINE Bus STARTING
2026-03-10T05:58:46.497 INFO:journalctl@ceph.mon.b.vm05.stdout:Mar 10 05:58:46 vm05 bash[43541]: cluster 2026-03-10T05:58:44.888998+0000 mgr.y (mgr.24992) 358 : cluster [DBG] pgmap v217: 161 pgs: 161 active+clean; 457 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T05:58:46.595 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mon.a.service'
2026-03-10T05:58:46.651 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T05:58:46.651 INFO:tasks.cephadm.mon.a:Stopped mon.a
2026-03-10T05:58:46.651 INFO:tasks.cephadm.mon.b:Stopping mon.c...
2026-03-10T05:58:46.651 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mon.c
2026-03-10T05:58:46.740 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:58:46 vm02 bash[52264]: [10/Mar/2026:05:58:46] ENGINE Serving on http://:::9283
2026-03-10T05:58:46.740 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:58:46 vm02 bash[52264]: [10/Mar/2026:05:58:46] ENGINE Bus STARTED
2026-03-10T05:58:46.740 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:46 vm02 systemd[1]: Stopping Ceph mon.c for 107483ae-1c44-11f1-b530-c1172cd6122a...
2026-03-10T05:58:46.915 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:58:46 vm02 bash[52264]: [10/Mar/2026:05:58:46] ENGINE Bus STOPPING
2026-03-10T05:58:46.916 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:46 vm02 bash[55303]: debug 2026-03-10T05:58:46.735+0000 7f5219c45640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-10T05:58:46.916 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:46 vm02 bash[55303]: debug 2026-03-10T05:58:46.735+0000 7f5219c45640 -1 mon.c@1(peon) e4 *** Got Signal Terminated ***
2026-03-10T05:58:46.971 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mon.c.service'
2026-03-10T05:58:46.996 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:46 vm02 bash[80291]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-mon-c
2026-03-10T05:58:46.996 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:46 vm02 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mon.c.service: Deactivated successfully.
2026-03-10T05:58:46.996 INFO:journalctl@ceph.mon.c.vm02.stdout:Mar 10 05:58:46 vm02 systemd[1]: Stopped Ceph mon.c for 107483ae-1c44-11f1-b530-c1172cd6122a.
2026-03-10T05:58:47.008 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T05:58:47.008 INFO:tasks.cephadm.mon.b:Stopped mon.c
2026-03-10T05:58:47.008 INFO:tasks.cephadm.mon.b:Stopping mon.b...
2026-03-10T05:58:47.008 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mon.b
2026-03-10T05:58:47.140 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mon.b.service'
2026-03-10T05:58:47.152 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T05:58:47.152 INFO:tasks.cephadm.mon.b:Stopped mon.b
2026-03-10T05:58:47.152 INFO:tasks.cephadm.mgr.y:Stopping mgr.y...
2026-03-10T05:58:47.152 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mgr.y
2026-03-10T05:58:47.229 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:58:46 vm02 bash[52264]: [10/Mar/2026:05:58:46] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-10T05:58:47.229 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:58:46 vm02 bash[52264]: [10/Mar/2026:05:58:46] ENGINE Bus STOPPED
2026-03-10T05:58:47.229 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:58:46 vm02 bash[52264]: [10/Mar/2026:05:58:46] ENGINE Bus STARTING
2026-03-10T05:58:47.229 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:58:47 vm02 bash[52264]: [10/Mar/2026:05:58:47] ENGINE Serving on http://:::9283
2026-03-10T05:58:47.229 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:58:47 vm02 bash[52264]: [10/Mar/2026:05:58:47] ENGINE Bus STARTED
2026-03-10T05:58:47.229 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:58:47 vm02 systemd[1]: Stopping Ceph mgr.y for 107483ae-1c44-11f1-b530-c1172cd6122a...
2026-03-10T05:58:47.229 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:58:47 vm02 bash[52264]: debug 2026-03-10T05:58:47.191+0000 7fd67bb80640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mgr -n mgr.y -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T05:58:47.229 INFO:journalctl@ceph.mgr.y.vm02.stdout:Mar 10 05:58:47 vm02 bash[52264]: debug 2026-03-10T05:58:47.191+0000 7fd67bb80640 -1 mgr handle_mgr_signal *** Got signal Terminated ***
2026-03-10T05:58:47.307 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mgr.y.service'
2026-03-10T05:58:47.319 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T05:58:47.319 INFO:tasks.cephadm.mgr.y:Stopped mgr.y
2026-03-10T05:58:47.319 INFO:tasks.cephadm.mgr.x:Stopping mgr.x...
2026-03-10T05:58:47.319 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mgr.x
2026-03-10T05:58:47.579 INFO:journalctl@ceph.mgr.x.vm05.stdout:Mar 10 05:58:47 vm05 systemd[1]: Stopping Ceph mgr.x for 107483ae-1c44-11f1-b530-c1172cd6122a...
2026-03-10T05:58:47.621 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@mgr.x.service'
2026-03-10T05:58:47.633 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T05:58:47.633 INFO:tasks.cephadm.mgr.x:Stopped mgr.x
2026-03-10T05:58:47.633 INFO:tasks.cephadm.osd.0:Stopping osd.0...
2026-03-10T05:58:47.633 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.0
2026-03-10T05:58:48.085 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:58:47 vm02 systemd[1]: Stopping Ceph osd.0 for 107483ae-1c44-11f1-b530-c1172cd6122a...
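The teardown walks every daemon and stops its cephadm-managed systemd unit; the units are instances of the per-fsid template ceph-<fsid>@.service, with the daemon id as the instance name. A sketch of the same sequence done by hand for the vm02 daemons (the daemon list is illustrative; the unit naming matches the systemctl lines in this log):

    # Stop cephadm-managed daemons via their per-fsid systemd template units.
    FSID=107483ae-1c44-11f1-b530-c1172cd6122a
    for daemon in mon.a mon.c mgr.y osd.0 osd.1 osd.2 osd.3; do
        sudo systemctl stop "ceph-${FSID}@${daemon}"
    done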
2026-03-10T05:58:48.085 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:58:47 vm02 bash[63533]: debug 2026-03-10T05:58:47.679+0000 7fb0de450640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T05:58:48.085 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:58:47 vm02 bash[63533]: debug 2026-03-10T05:58:47.679+0000 7fb0de450640 -1 osd.0 138 *** Got signal Terminated ***
2026-03-10T05:58:48.085 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:58:47 vm02 bash[63533]: debug 2026-03-10T05:58:47.679+0000 7fb0de450640 -1 osd.0 138 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T05:58:53.012 INFO:journalctl@ceph.osd.0.vm02.stdout:Mar 10 05:58:52 vm02 bash[80475]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-osd-0
2026-03-10T05:58:53.060 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.0.service'
2026-03-10T05:58:53.074 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T05:58:53.074 INFO:tasks.cephadm.osd.0:Stopped osd.0
2026-03-10T05:58:53.074 INFO:tasks.cephadm.osd.1:Stopping osd.1...
2026-03-10T05:58:53.074 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.1
2026-03-10T05:58:53.285 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:58:53 vm02 systemd[1]: Stopping Ceph osd.1 for 107483ae-1c44-11f1-b530-c1172cd6122a...
2026-03-10T05:58:53.335 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:58:53 vm02 bash[65730]: debug 2026-03-10T05:58:53.279+0000 7f8547f0c640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T05:58:53.335 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:58:53 vm02 bash[65730]: debug 2026-03-10T05:58:53.279+0000 7f8547f0c640 -1 osd.1 138 *** Got signal Terminated ***
2026-03-10T05:58:53.335 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:58:53 vm02 bash[65730]: debug 2026-03-10T05:58:53.279+0000 7f8547f0c640 -1 osd.1 138 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T05:58:58.585 INFO:journalctl@ceph.osd.1.vm02.stdout:Mar 10 05:58:58 vm02 bash[80656]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-osd-1
2026-03-10T05:58:58.677 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.1.service'
2026-03-10T05:58:58.688 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T05:58:58.688 INFO:tasks.cephadm.osd.1:Stopped osd.1
2026-03-10T05:58:58.688 INFO:tasks.cephadm.osd.2:Stopping osd.2...
2026-03-10T05:58:58.688 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.2
2026-03-10T05:58:59.085 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:58:58 vm02 systemd[1]: Stopping Ceph osd.2 for 107483ae-1c44-11f1-b530-c1172cd6122a...
2026-03-10T05:58:59.085 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:58:58 vm02 bash[61325]: debug 2026-03-10T05:58:58.771+0000 7f8dcbbb6640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T05:58:59.085 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:58:58 vm02 bash[61325]: debug 2026-03-10T05:58:58.771+0000 7f8dcbbb6640 -1 osd.2 138 *** Got signal Terminated ***
2026-03-10T05:58:59.085 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:58:58 vm02 bash[61325]: debug 2026-03-10T05:58:58.771+0000 7f8dcbbb6640 -1 osd.2 138 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T05:59:04.085 INFO:journalctl@ceph.osd.2.vm02.stdout:Mar 10 05:59:03 vm02 bash[80841]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-osd-2
2026-03-10T05:59:04.153 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.2.service'
2026-03-10T05:59:04.164 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T05:59:04.164 INFO:tasks.cephadm.osd.2:Stopped osd.2
2026-03-10T05:59:04.164 INFO:tasks.cephadm.osd.3:Stopping osd.3...
2026-03-10T05:59:04.164 DEBUG:teuthology.orchestra.run.vm02:> sudo systemctl stop ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.3
2026-03-10T05:59:04.585 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:59:04 vm02 systemd[1]: Stopping Ceph osd.3 for 107483ae-1c44-11f1-b530-c1172cd6122a...
2026-03-10T05:59:04.585 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:59:04 vm02 bash[59145]: debug 2026-03-10T05:59:04.251+0000 7f7cfb43a640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T05:59:04.585 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:59:04 vm02 bash[59145]: debug 2026-03-10T05:59:04.251+0000 7f7cfb43a640 -1 osd.3 138 *** Got signal Terminated ***
2026-03-10T05:59:04.585 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:59:04 vm02 bash[59145]: debug 2026-03-10T05:59:04.251+0000 7f7cfb43a640 -1 osd.3 138 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T05:59:09.577 INFO:journalctl@ceph.osd.3.vm02.stdout:Mar 10 05:59:09 vm02 bash[81017]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-osd-3
2026-03-10T05:59:09.660 DEBUG:teuthology.orchestra.run.vm02:> sudo pkill -f 'journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.3.service'
2026-03-10T05:59:09.670 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T05:59:09.670 INFO:tasks.cephadm.osd.3:Stopped osd.3
2026-03-10T05:59:09.670 INFO:tasks.cephadm.osd.4:Stopping osd.4...
2026-03-10T05:59:09.670 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.4
2026-03-10T05:59:09.997 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:59:09 vm05 systemd[1]: Stopping Ceph osd.4 for 107483ae-1c44-11f1-b530-c1172cd6122a...
2026-03-10T05:59:09.997 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:59:09 vm05 bash[45790]: debug 2026-03-10T05:59:09.721+0000 7f237cfea640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T05:59:09.997 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:59:09 vm05 bash[45790]: debug 2026-03-10T05:59:09.721+0000 7f237cfea640 -1 osd.4 138 *** Got signal Terminated ***
2026-03-10T05:59:09.997 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:59:09 vm05 bash[45790]: debug 2026-03-10T05:59:09.721+0000 7f237cfea640 -1 osd.4 138 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T05:59:14.497 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:59:14 vm05 bash[45790]: debug 2026-03-10T05:59:14.189+0000 7f2379603640 -1 osd.4 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:51.256242+0000 front 2026-03-10T05:58:51.256257+0000 (oldest deadline 2026-03-10T05:59:14.156071+0000)
2026-03-10T05:59:14.497 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:14 vm05 bash[51877]: debug 2026-03-10T05:59:14.009+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.891468+0000 front 2026-03-10T05:58:50.891432+0000 (oldest deadline 2026-03-10T05:59:13.791326+0000)
2026-03-10T05:59:15.037 INFO:journalctl@ceph.osd.4.vm05.stdout:Mar 10 05:59:14 vm05 bash[62730]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-osd-4
2026-03-10T05:59:15.108 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.4.service'
2026-03-10T05:59:15.119 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T05:59:15.119 INFO:tasks.cephadm.osd.4:Stopped osd.4
2026-03-10T05:59:15.119 INFO:tasks.cephadm.osd.5:Stopping osd.5...
2026-03-10T05:59:15.119 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.5
2026-03-10T05:59:15.497 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:59:15 vm05 systemd[1]: Stopping Ceph osd.5 for 107483ae-1c44-11f1-b530-c1172cd6122a...
2026-03-10T05:59:15.497 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:59:15 vm05 bash[47813]: debug 2026-03-10T05:59:15.209+0000 7f086a03b640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T05:59:15.497 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:59:15 vm05 bash[47813]: debug 2026-03-10T05:59:15.209+0000 7f086a03b640 -1 osd.5 138 *** Got signal Terminated *** 2026-03-10T05:59:15.497 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:59:15 vm05 bash[47813]: debug 2026-03-10T05:59:15.209+0000 7f086a03b640 -1 osd.5 138 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T05:59:15.497 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:15 vm05 bash[51877]: debug 2026-03-10T05:59:15.037+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.891468+0000 front 2026-03-10T05:58:50.891432+0000 (oldest deadline 2026-03-10T05:59:13.791326+0000) 2026-03-10T05:59:16.334 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:16 vm05 bash[51877]: debug 2026-03-10T05:59:16.077+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.891468+0000 front 2026-03-10T05:58:50.891432+0000 (oldest deadline 2026-03-10T05:59:13.791326+0000) 2026-03-10T05:59:16.747 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:59:16 vm05 bash[49827]: debug 2026-03-10T05:59:16.337+0000 7fac19e10640 -1 osd.6 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.942600+0000 front 2026-03-10T05:58:50.942435+0000 (oldest deadline 2026-03-10T05:59:16.242335+0000) 2026-03-10T05:59:17.321 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:17 vm05 bash[51877]: debug 2026-03-10T05:59:17.025+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.891468+0000 front 2026-03-10T05:58:50.891432+0000 (oldest deadline 2026-03-10T05:59:13.791326+0000) 2026-03-10T05:59:17.747 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:59:17 vm05 bash[49827]: debug 2026-03-10T05:59:17.321+0000 7fac19e10640 -1 osd.6 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.942600+0000 front 2026-03-10T05:58:50.942435+0000 (oldest deadline 2026-03-10T05:59:16.242335+0000) 2026-03-10T05:59:18.344 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:59:18 vm05 bash[47813]: debug 2026-03-10T05:59:18.166+0000 7f0865e53640 -1 osd.5 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:51.890792+0000 front 2026-03-10T05:58:51.890542+0000 (oldest deadline 2026-03-10T05:59:17.190122+0000) 2026-03-10T05:59:18.344 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:18 vm05 bash[51877]: debug 2026-03-10T05:59:18.066+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.891468+0000 front 2026-03-10T05:58:50.891432+0000 (oldest deadline 2026-03-10T05:59:13.791326+0000) 2026-03-10T05:59:18.747 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:59:18 vm05 bash[49827]: debug 2026-03-10T05:59:18.346+0000 7fac19e10640 -1 osd.6 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.942600+0000 front 2026-03-10T05:58:50.942435+0000 (oldest deadline 2026-03-10T05:59:16.242335+0000) 
2026-03-10T05:59:19.358 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:59:19 vm05 bash[47813]: debug 2026-03-10T05:59:19.126+0000 7f0865e53640 -1 osd.5 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:51.890792+0000 front 2026-03-10T05:58:51.890542+0000 (oldest deadline 2026-03-10T05:59:17.190122+0000) 2026-03-10T05:59:19.359 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:19 vm05 bash[51877]: debug 2026-03-10T05:59:19.098+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.891468+0000 front 2026-03-10T05:58:50.891432+0000 (oldest deadline 2026-03-10T05:59:13.791326+0000) 2026-03-10T05:59:19.747 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:59:19 vm05 bash[49827]: debug 2026-03-10T05:59:19.362+0000 7fac19e10640 -1 osd.6 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.942600+0000 front 2026-03-10T05:58:50.942435+0000 (oldest deadline 2026-03-10T05:59:16.242335+0000) 2026-03-10T05:59:20.363 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:59:20 vm05 bash[47813]: debug 2026-03-10T05:59:20.130+0000 7f0865e53640 -1 osd.5 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:51.890792+0000 front 2026-03-10T05:58:51.890542+0000 (oldest deadline 2026-03-10T05:59:17.190122+0000) 2026-03-10T05:59:20.363 INFO:journalctl@ceph.osd.5.vm05.stdout:Mar 10 05:59:20 vm05 bash[62913]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-osd-5 2026-03-10T05:59:20.363 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:20 vm05 bash[51877]: debug 2026-03-10T05:59:20.114+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.891468+0000 front 2026-03-10T05:58:50.891432+0000 (oldest deadline 2026-03-10T05:59:13.791326+0000) 2026-03-10T05:59:20.579 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.5.service' 2026-03-10T05:59:20.590 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T05:59:20.590 INFO:tasks.cephadm.osd.5:Stopped osd.5 2026-03-10T05:59:20.590 INFO:tasks.cephadm.osd.6:Stopping osd.6... 2026-03-10T05:59:20.590 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.6 2026-03-10T05:59:20.635 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:59:20 vm05 bash[49827]: debug 2026-03-10T05:59:20.382+0000 7fac19e10640 -1 osd.6 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.942600+0000 front 2026-03-10T05:58:50.942435+0000 (oldest deadline 2026-03-10T05:59:16.242335+0000) 2026-03-10T05:59:20.635 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:59:20 vm05 bash[49827]: debug 2026-03-10T05:59:20.382+0000 7fac19e10640 -1 osd.6 138 heartbeat_check: no reply from 192.168.123.102:6814 osd.1 since back 2026-03-10T05:58:56.242653+0000 front 2026-03-10T05:58:56.242835+0000 (oldest deadline 2026-03-10T05:59:20.342586+0000) 2026-03-10T05:59:20.997 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:59:20 vm05 systemd[1]: Stopping Ceph osd.6 for 107483ae-1c44-11f1-b530-c1172cd6122a... 
2026-03-10T05:59:20.997 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:59:20 vm05 bash[49827]: debug 2026-03-10T05:59:20.674+0000 7fac1dff8640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.6 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T05:59:20.997 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:59:20 vm05 bash[49827]: debug 2026-03-10T05:59:20.674+0000 7fac1dff8640 -1 osd.6 138 *** Got signal Terminated *** 2026-03-10T05:59:20.997 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:59:20 vm05 bash[49827]: debug 2026-03-10T05:59:20.674+0000 7fac1dff8640 -1 osd.6 138 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T05:59:21.410 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:21 vm05 bash[51877]: debug 2026-03-10T05:59:21.126+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.891468+0000 front 2026-03-10T05:58:50.891432+0000 (oldest deadline 2026-03-10T05:59:13.791326+0000) 2026-03-10T05:59:21.411 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:21 vm05 bash[51877]: debug 2026-03-10T05:59:21.126+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6814 osd.1 since back 2026-03-10T05:58:54.292082+0000 front 2026-03-10T05:58:54.292191+0000 (oldest deadline 2026-03-10T05:59:20.191759+0000) 2026-03-10T05:59:21.747 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:59:21 vm05 bash[49827]: debug 2026-03-10T05:59:21.410+0000 7fac19e10640 -1 osd.6 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.942600+0000 front 2026-03-10T05:58:50.942435+0000 (oldest deadline 2026-03-10T05:59:16.242335+0000) 2026-03-10T05:59:21.747 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:59:21 vm05 bash[49827]: debug 2026-03-10T05:59:21.410+0000 7fac19e10640 -1 osd.6 138 heartbeat_check: no reply from 192.168.123.102:6814 osd.1 since back 2026-03-10T05:58:56.242653+0000 front 2026-03-10T05:58:56.242835+0000 (oldest deadline 2026-03-10T05:59:20.342586+0000) 2026-03-10T05:59:22.424 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:22 vm05 bash[51877]: debug 2026-03-10T05:59:22.162+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.891468+0000 front 2026-03-10T05:58:50.891432+0000 (oldest deadline 2026-03-10T05:59:13.791326+0000) 2026-03-10T05:59:22.424 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:22 vm05 bash[51877]: debug 2026-03-10T05:59:22.162+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6814 osd.1 since back 2026-03-10T05:58:54.292082+0000 front 2026-03-10T05:58:54.292191+0000 (oldest deadline 2026-03-10T05:59:20.191759+0000) 2026-03-10T05:59:22.747 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:59:22 vm05 bash[49827]: debug 2026-03-10T05:59:22.426+0000 7fac19e10640 -1 osd.6 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.942600+0000 front 2026-03-10T05:58:50.942435+0000 (oldest deadline 2026-03-10T05:59:16.242335+0000) 2026-03-10T05:59:22.747 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:59:22 vm05 bash[49827]: debug 2026-03-10T05:59:22.426+0000 7fac19e10640 -1 osd.6 138 heartbeat_check: no reply from 192.168.123.102:6814 osd.1 since back 2026-03-10T05:58:56.242653+0000 front 2026-03-10T05:58:56.242835+0000 (oldest deadline 2026-03-10T05:59:20.342586+0000) 
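[editor's note] The flood of "heartbeat_check: no reply from ... osd.N" lines from osd.5/6/7 is expected at this point: their peers on vm02 (osd.0-3) were already stopped above, so both heartbeat channels (back and front) go silent, and every check past the oldest deadline logs one line per unreachable peer until the reporting OSD is itself stopped. A rough parser for this message shape, assuming the layout shown in these lines (illustration only):

    import re

    # matches the heartbeat_check lines appearing in this log
    HB = re.compile(
        r"heartbeat_check: no reply from (?P<addr>\S+) (?P<peer>osd\.\d+)"
        r" since back (?P<back>\S+) front (?P<front>\S+)"
        r" \(oldest deadline (?P<deadline>\S+)\)"
    )

    line = ("heartbeat_check: no reply from 192.168.123.102:6806 osd.0"
            " since back 2026-03-10T05:58:50.891468+0000"
            " front 2026-03-10T05:58:50.891432+0000"
            " (oldest deadline 2026-03-10T05:59:13.791326+0000)")
    m = HB.search(line)
    assert m and m.group("peer") == "osd.0"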
2026-03-10T05:59:23.497 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:59:23 vm05 bash[49827]: debug 2026-03-10T05:59:23.398+0000 7fac19e10640 -1 osd.6 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.942600+0000 front 2026-03-10T05:58:50.942435+0000 (oldest deadline 2026-03-10T05:59:16.242335+0000) 2026-03-10T05:59:23.497 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:59:23 vm05 bash[49827]: debug 2026-03-10T05:59:23.398+0000 7fac19e10640 -1 osd.6 138 heartbeat_check: no reply from 192.168.123.102:6814 osd.1 since back 2026-03-10T05:58:56.242653+0000 front 2026-03-10T05:58:56.242835+0000 (oldest deadline 2026-03-10T05:59:20.342586+0000) 2026-03-10T05:59:23.497 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:23 vm05 bash[51877]: debug 2026-03-10T05:59:23.154+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.891468+0000 front 2026-03-10T05:58:50.891432+0000 (oldest deadline 2026-03-10T05:59:13.791326+0000) 2026-03-10T05:59:23.497 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:23 vm05 bash[51877]: debug 2026-03-10T05:59:23.154+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6814 osd.1 since back 2026-03-10T05:58:54.292082+0000 front 2026-03-10T05:58:54.292191+0000 (oldest deadline 2026-03-10T05:59:20.191759+0000) 2026-03-10T05:59:24.441 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:24 vm05 bash[51877]: debug 2026-03-10T05:59:24.114+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.891468+0000 front 2026-03-10T05:58:50.891432+0000 (oldest deadline 2026-03-10T05:59:13.791326+0000) 2026-03-10T05:59:24.441 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:24 vm05 bash[51877]: debug 2026-03-10T05:59:24.114+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6814 osd.1 since back 2026-03-10T05:58:54.292082+0000 front 2026-03-10T05:58:54.292191+0000 (oldest deadline 2026-03-10T05:59:20.191759+0000) 2026-03-10T05:59:24.746 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:59:24 vm05 bash[49827]: debug 2026-03-10T05:59:24.442+0000 7fac19e10640 -1 osd.6 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.942600+0000 front 2026-03-10T05:58:50.942435+0000 (oldest deadline 2026-03-10T05:59:16.242335+0000) 2026-03-10T05:59:24.747 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:59:24 vm05 bash[49827]: debug 2026-03-10T05:59:24.442+0000 7fac19e10640 -1 osd.6 138 heartbeat_check: no reply from 192.168.123.102:6814 osd.1 since back 2026-03-10T05:58:56.242653+0000 front 2026-03-10T05:58:56.242835+0000 (oldest deadline 2026-03-10T05:59:20.342586+0000) 2026-03-10T05:59:25.400 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:25 vm05 bash[51877]: debug 2026-03-10T05:59:25.154+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.891468+0000 front 2026-03-10T05:58:50.891432+0000 (oldest deadline 2026-03-10T05:59:13.791326+0000) 2026-03-10T05:59:25.400 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:25 vm05 bash[51877]: debug 2026-03-10T05:59:25.154+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6814 osd.1 since back 2026-03-10T05:58:54.292082+0000 front 2026-03-10T05:58:54.292191+0000 (oldest deadline 2026-03-10T05:59:20.191759+0000) 2026-03-10T05:59:25.400 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:25 vm05 
bash[51877]: debug 2026-03-10T05:59:25.154+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6822 osd.2 since back 2026-03-10T05:59:00.192476+0000 front 2026-03-10T05:59:00.192411+0000 (oldest deadline 2026-03-10T05:59:24.892071+0000) 2026-03-10T05:59:25.715 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:59:25 vm05 bash[49827]: debug 2026-03-10T05:59:25.402+0000 7fac19e10640 -1 osd.6 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.942600+0000 front 2026-03-10T05:58:50.942435+0000 (oldest deadline 2026-03-10T05:59:16.242335+0000) 2026-03-10T05:59:25.715 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:59:25 vm05 bash[49827]: debug 2026-03-10T05:59:25.402+0000 7fac19e10640 -1 osd.6 138 heartbeat_check: no reply from 192.168.123.102:6814 osd.1 since back 2026-03-10T05:58:56.242653+0000 front 2026-03-10T05:58:56.242835+0000 (oldest deadline 2026-03-10T05:59:20.342586+0000) 2026-03-10T05:59:25.715 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:59:25 vm05 bash[49827]: debug 2026-03-10T05:59:25.402+0000 7fac19e10640 -1 osd.6 138 heartbeat_check: no reply from 192.168.123.102:6822 osd.2 since back 2026-03-10T05:59:00.343237+0000 front 2026-03-10T05:59:00.343080+0000 (oldest deadline 2026-03-10T05:59:25.042813+0000) 2026-03-10T05:59:25.977 INFO:journalctl@ceph.osd.6.vm05.stdout:Mar 10 05:59:25 vm05 bash[63095]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-osd-6 2026-03-10T05:59:26.026 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.6.service' 2026-03-10T05:59:26.080 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T05:59:26.080 INFO:tasks.cephadm.osd.6:Stopped osd.6 2026-03-10T05:59:26.080 INFO:tasks.cephadm.osd.7:Stopping osd.7... 2026-03-10T05:59:26.080 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.7 2026-03-10T05:59:26.247 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:26 vm05 systemd[1]: Stopping Ceph osd.7 for 107483ae-1c44-11f1-b530-c1172cd6122a... 
2026-03-10T05:59:26.247 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:26 vm05 bash[51877]: debug 2026-03-10T05:59:26.166+0000 7f5154391640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.7 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T05:59:26.247 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:26 vm05 bash[51877]: debug 2026-03-10T05:59:26.166+0000 7f5154391640 -1 osd.7 138 *** Got signal Terminated *** 2026-03-10T05:59:26.247 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:26 vm05 bash[51877]: debug 2026-03-10T05:59:26.166+0000 7f5154391640 -1 osd.7 138 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T05:59:26.247 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:26 vm05 bash[51877]: debug 2026-03-10T05:59:26.170+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.891468+0000 front 2026-03-10T05:58:50.891432+0000 (oldest deadline 2026-03-10T05:59:13.791326+0000) 2026-03-10T05:59:26.247 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:26 vm05 bash[51877]: debug 2026-03-10T05:59:26.170+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6814 osd.1 since back 2026-03-10T05:58:54.292082+0000 front 2026-03-10T05:58:54.292191+0000 (oldest deadline 2026-03-10T05:59:20.191759+0000) 2026-03-10T05:59:26.247 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:26 vm05 bash[51877]: debug 2026-03-10T05:59:26.170+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6822 osd.2 since back 2026-03-10T05:59:00.192476+0000 front 2026-03-10T05:59:00.192411+0000 (oldest deadline 2026-03-10T05:59:24.892071+0000) 2026-03-10T05:59:27.496 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:27 vm05 bash[51877]: debug 2026-03-10T05:59:27.190+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.891468+0000 front 2026-03-10T05:58:50.891432+0000 (oldest deadline 2026-03-10T05:59:13.791326+0000) 2026-03-10T05:59:27.497 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:27 vm05 bash[51877]: debug 2026-03-10T05:59:27.190+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6814 osd.1 since back 2026-03-10T05:58:54.292082+0000 front 2026-03-10T05:58:54.292191+0000 (oldest deadline 2026-03-10T05:59:20.191759+0000) 2026-03-10T05:59:27.497 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:27 vm05 bash[51877]: debug 2026-03-10T05:59:27.190+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6822 osd.2 since back 2026-03-10T05:59:00.192476+0000 front 2026-03-10T05:59:00.192411+0000 (oldest deadline 2026-03-10T05:59:24.892071+0000) 2026-03-10T05:59:28.497 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:28 vm05 bash[51877]: debug 2026-03-10T05:59:28.158+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.891468+0000 front 2026-03-10T05:58:50.891432+0000 (oldest deadline 2026-03-10T05:59:13.791326+0000) 2026-03-10T05:59:28.497 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:28 vm05 bash[51877]: debug 2026-03-10T05:59:28.158+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6814 osd.1 since back 2026-03-10T05:58:54.292082+0000 front 2026-03-10T05:58:54.292191+0000 (oldest deadline 2026-03-10T05:59:20.191759+0000) 
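[editor's note] Each OSD exits with "Immediate shutdown (osd_fast_shutdown=true)": with osd_fast_shutdown enabled (the default in recent releases), SIGTERM makes the OSD exit immediately rather than drain cleanly, which is why each systemctl stop returns within seconds. If a run needed the slower, orderly shutdown instead, the flag could be inspected and flipped via the config CLI, e.g. (sketch):

    import subprocess

    # read the option that produced the "Immediate shutdown" lines above
    out = subprocess.run(["ceph", "config", "get", "osd", "osd_fast_shutdown"],
                         capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # expected: 'true' on this cluster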
2026-03-10T05:59:28.497 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:28 vm05 bash[51877]: debug 2026-03-10T05:59:28.158+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6822 osd.2 since back 2026-03-10T05:59:00.192476+0000 front 2026-03-10T05:59:00.192411+0000 (oldest deadline 2026-03-10T05:59:24.892071+0000) 2026-03-10T05:59:29.496 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:29 vm05 bash[51877]: debug 2026-03-10T05:59:29.138+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.891468+0000 front 2026-03-10T05:58:50.891432+0000 (oldest deadline 2026-03-10T05:59:13.791326+0000) 2026-03-10T05:59:29.497 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:29 vm05 bash[51877]: debug 2026-03-10T05:59:29.138+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6814 osd.1 since back 2026-03-10T05:58:54.292082+0000 front 2026-03-10T05:58:54.292191+0000 (oldest deadline 2026-03-10T05:59:20.191759+0000) 2026-03-10T05:59:29.497 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:29 vm05 bash[51877]: debug 2026-03-10T05:59:29.138+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6822 osd.2 since back 2026-03-10T05:59:00.192476+0000 front 2026-03-10T05:59:00.192411+0000 (oldest deadline 2026-03-10T05:59:24.892071+0000) 2026-03-10T05:59:30.496 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:30 vm05 bash[51877]: debug 2026-03-10T05:59:30.146+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.891468+0000 front 2026-03-10T05:58:50.891432+0000 (oldest deadline 2026-03-10T05:59:13.791326+0000) 2026-03-10T05:59:30.497 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:30 vm05 bash[51877]: debug 2026-03-10T05:59:30.146+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6814 osd.1 since back 2026-03-10T05:58:54.292082+0000 front 2026-03-10T05:58:54.292191+0000 (oldest deadline 2026-03-10T05:59:20.191759+0000) 2026-03-10T05:59:30.497 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:30 vm05 bash[51877]: debug 2026-03-10T05:59:30.146+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6822 osd.2 since back 2026-03-10T05:59:00.192476+0000 front 2026-03-10T05:59:00.192411+0000 (oldest deadline 2026-03-10T05:59:24.892071+0000) 2026-03-10T05:59:30.497 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:30 vm05 bash[51877]: debug 2026-03-10T05:59:30.146+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6830 osd.3 since back 2026-03-10T05:59:08.392576+0000 front 2026-03-10T05:59:08.392625+0000 (oldest deadline 2026-03-10T05:59:29.492496+0000) 2026-03-10T05:59:31.483 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:31 vm05 bash[51877]: debug 2026-03-10T05:59:31.118+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6806 osd.0 since back 2026-03-10T05:58:50.891468+0000 front 2026-03-10T05:58:50.891432+0000 (oldest deadline 2026-03-10T05:59:13.791326+0000) 2026-03-10T05:59:31.483 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:31 vm05 bash[51877]: debug 2026-03-10T05:59:31.118+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6814 osd.1 since back 2026-03-10T05:58:54.292082+0000 front 2026-03-10T05:58:54.292191+0000 (oldest deadline 2026-03-10T05:59:20.191759+0000) 2026-03-10T05:59:31.483 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:31 vm05 
bash[51877]: debug 2026-03-10T05:59:31.118+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6822 osd.2 since back 2026-03-10T05:59:00.192476+0000 front 2026-03-10T05:59:00.192411+0000 (oldest deadline 2026-03-10T05:59:24.892071+0000) 2026-03-10T05:59:31.483 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:31 vm05 bash[51877]: debug 2026-03-10T05:59:31.118+0000 7f51501a9640 -1 osd.7 138 heartbeat_check: no reply from 192.168.123.102:6830 osd.3 since back 2026-03-10T05:59:08.392576+0000 front 2026-03-10T05:59:08.392625+0000 (oldest deadline 2026-03-10T05:59:29.492496+0000) 2026-03-10T05:59:31.483 INFO:journalctl@ceph.osd.7.vm05.stdout:Mar 10 05:59:31 vm05 bash[63278]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-osd-7 2026-03-10T05:59:31.510 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@osd.7.service' 2026-03-10T05:59:31.523 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T05:59:31.523 INFO:tasks.cephadm.osd.7:Stopped osd.7 2026-03-10T05:59:31.523 INFO:tasks.cephadm.prometheus.a:Stopping prometheus.a... 2026-03-10T05:59:31.523 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-107483ae-1c44-11f1-b530-c1172cd6122a@prometheus.a 2026-03-10T05:59:31.667 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-107483ae-1c44-11f1-b530-c1172cd6122a@prometheus.a.service' 2026-03-10T05:59:31.687 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T05:59:31.687 INFO:tasks.cephadm.prometheus.a:Stopped prometheus.a 2026-03-10T05:59:31.687 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 107483ae-1c44-11f1-b530-c1172cd6122a --force --keep-logs 2026-03-10T05:59:34.585 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:59:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:59:34.585 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:59:34 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:59:44.776 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:59:44 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:59:44.776 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:59:44 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:59:45.049 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:59:44 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:59:45.049 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:59:44 vm02 systemd[1]: Stopping Ceph alertmanager.a for 107483ae-1c44-11f1-b530-c1172cd6122a... 2026-03-10T05:59:45.049 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:59:44 vm02 bash[51578]: ts=2026-03-10T05:59:44.986Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..." 2026-03-10T05:59:45.049 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:59:45 vm02 bash[81405]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-alertmanager-a 2026-03-10T05:59:45.049 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:59:44 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:59:45.335 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:59:45 vm02 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@alertmanager.a.service: Deactivated successfully. 2026-03-10T05:59:45.335 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:59:45 vm02 systemd[1]: Stopped Ceph alertmanager.a for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T05:59:45.335 INFO:journalctl@ceph.alertmanager.a.vm02.stdout:Mar 10 05:59:45 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:59:45.335 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:59:45 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T05:59:55.574 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:59:55 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
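[editor's note] From here on, every journal poll also reprints systemd's complaint about KillMode=none in the templated unit file. cephadm's unit uses KillMode=none so that the container runtime, not systemd, tears down the daemon processes, but the warning is accurate: newer systemd deprecates that mode. Purely as an illustration of the remedy systemd suggests, a drop-in override could switch the unit to 'mixed' (changing this on a live cephadm cluster would need care):

    import pathlib

    FSID = "107483ae-1c44-11f1-b530-c1172cd6122a"
    dropin = pathlib.Path(f"/etc/systemd/system/ceph-{FSID}@.service.d")
    dropin.mkdir(parents=True, exist_ok=True)
    # 'mixed': SIGTERM to the main process, SIGKILL to the whole control
    # group on timeout -- one of the modes the warning recommends
    (dropin / "override.conf").write_text("[Service]\nKillMode=mixed\n")
    # afterwards: systemctl daemon-reload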
2026-03-10T05:59:55.575 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:59:55 vm02 systemd[1]: Stopping Ceph node-exporter.a for 107483ae-1c44-11f1-b530-c1172cd6122a... 2026-03-10T05:59:55.835 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:59:55 vm02 bash[81645]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-node-exporter-a 2026-03-10T05:59:55.835 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:59:55 vm02 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@node-exporter.a.service: Main process exited, code=exited, status=143/n/a 2026-03-10T05:59:55.835 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:59:55 vm02 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@node-exporter.a.service: Failed with result 'exit-code'. 2026-03-10T05:59:55.835 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:59:55 vm02 systemd[1]: Stopped Ceph node-exporter.a for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T05:59:55.835 INFO:journalctl@ceph.node-exporter.a.vm02.stdout:Mar 10 05:59:55 vm02 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T06:00:07.257 INFO:teuthology.orchestra.run.vm02.stderr:Traceback (most recent call last): 2026-03-10T06:00:07.257 INFO:teuthology.orchestra.run.vm02.stderr: File "/home/ubuntu/cephtest/cephadm", line 8634, in <module> 2026-03-10T06:00:07.257 INFO:teuthology.orchestra.run.vm02.stderr: main() 2026-03-10T06:00:07.257 INFO:teuthology.orchestra.run.vm02.stderr: File "/home/ubuntu/cephtest/cephadm", line 8622, in main 2026-03-10T06:00:07.258 INFO:teuthology.orchestra.run.vm02.stderr: r = ctx.func(ctx) 2026-03-10T06:00:07.258 INFO:teuthology.orchestra.run.vm02.stderr: File "/home/ubuntu/cephtest/cephadm", line 6538, in command_rm_cluster 2026-03-10T06:00:07.258 INFO:teuthology.orchestra.run.vm02.stderr: with open(files[0]) as f: 2026-03-10T06:00:07.258 INFO:teuthology.orchestra.run.vm02.stderr:IsADirectoryError: [Errno 21] Is a directory: '/etc/ceph/ceph.conf' 2026-03-10T06:00:07.270 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T06:00:07.270 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 107483ae-1c44-11f1-b530-c1172cd6122a --force --keep-logs 2026-03-10T06:00:10.130 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 06:00:09 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T06:00:10.130 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 06:00:09 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
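[editor's note] The traceback above is the first hard failure of the teardown: command_rm_cluster opens files[0], which here is /etc/ceph/ceph.conf, and on vm02 that path is a directory rather than a file (the usual fingerprint of a container bind mount auto-creating a directory for a missing source path). A defensive variant of that read, sketched with a hypothetical helper under the assumption that skipping non-files is acceptable there (not the actual cephadm code):

    import glob
    import os

    def read_first_conf(pattern: str = "/etc/ceph/ceph.conf"):
        """Return the contents of the first matching regular file, else None."""
        for path in sorted(glob.glob(pattern)):
            if os.path.isfile(path):  # a bind-mount artifact may be a directory
                with open(path) as f:
                    return f.read()
        return None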
2026-03-10T06:00:10.393 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 06:00:10 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T06:00:10.393 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 06:00:10 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T06:00:10.393 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 06:00:10 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T06:00:10.393 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 06:00:10 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T06:00:10.746 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 06:00:10 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T06:00:10.746 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 06:00:10 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T06:00:20.873 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 06:00:20 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T06:00:20.873 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 06:00:20 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T06:00:21.126 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 06:00:21 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T06:00:21.126 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 06:00:21 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T06:00:21.126 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 06:00:21 vm05 systemd[1]: Stopping Ceph grafana.a for 107483ae-1c44-11f1-b530-c1172cd6122a... 2026-03-10T06:00:21.126 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 06:00:21 vm05 bash[59013]: logger=server t=2026-03-10T06:00:21.122400326Z level=info msg="Shutdown started" reason="System signal: terminated" 2026-03-10T06:00:21.126 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 06:00:21 vm05 bash[59013]: logger=tracing t=2026-03-10T06:00:21.122558115Z level=info msg="Closing tracing" 2026-03-10T06:00:21.126 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 06:00:21 vm05 bash[59013]: logger=ticker t=2026-03-10T06:00:21.122976372Z level=info msg=stopped last_tick=2026-03-10T06:00:20Z 2026-03-10T06:00:21.126 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 06:00:21 vm05 bash[59013]: logger=grafana-apiserver t=2026-03-10T06:00:21.123161912Z level=info msg="StorageObjectCountTracker pruner is exiting" 2026-03-10T06:00:21.496 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 06:00:21 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T06:00:21.496 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 06:00:21 vm05 bash[63869]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-grafana-a 2026-03-10T06:00:21.496 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 06:00:21 vm05 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@grafana.a.service: Deactivated successfully. 2026-03-10T06:00:21.496 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 06:00:21 vm05 systemd[1]: Stopped Ceph grafana.a for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T06:00:21.496 INFO:journalctl@ceph.grafana.a.vm05.stdout:Mar 10 06:00:21 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
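[editor's note] Both node-exporter units end with "Main process exited, code=exited, status=143/n/a", which systemd records as a failure ('exit-code') even though it is a routine stop: 143 is simply 128 + 15, the conventional encoding for "terminated by SIGTERM" when a process exits instead of trapping the signal. Worked out:

    import signal

    # exit status 128 + N means "killed by signal N"; SIGTERM is 15
    assert 128 + signal.SIGTERM == 143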
2026-03-10T06:00:31.662 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 06:00:31 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T06:00:31.915 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 06:00:31 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T06:00:31.915 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 06:00:31 vm05 systemd[1]: Stopping Ceph node-exporter.b for 107483ae-1c44-11f1-b530-c1172cd6122a... 2026-03-10T06:00:31.915 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 06:00:31 vm05 bash[64145]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a-node-exporter-b 2026-03-10T06:00:32.240 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 06:00:31 vm05 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@node-exporter.b.service: Main process exited, code=exited, status=143/n/a 2026-03-10T06:00:32.240 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 06:00:31 vm05 systemd[1]: ceph-107483ae-1c44-11f1-b530-c1172cd6122a@node-exporter.b.service: Failed with result 'exit-code'. 2026-03-10T06:00:32.240 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 06:00:31 vm05 systemd[1]: Stopped Ceph node-exporter.b for 107483ae-1c44-11f1-b530-c1172cd6122a. 2026-03-10T06:00:32.240 INFO:journalctl@ceph.node-exporter.b.vm05.stdout:Mar 10 06:00:32 vm05 systemd[1]: /etc/systemd/system/ceph-107483ae-1c44-11f1-b530-c1172cd6122a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T06:00:32.529 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T06:00:32.536 INFO:teuthology.orchestra.run.vm02.stderr:rm: cannot remove '/etc/ceph/ceph.conf': Is a directory 2026-03-10T06:00:32.536 INFO:teuthology.orchestra.run.vm02.stderr:rm: cannot remove '/etc/ceph/ceph.client.admin.keyring': Is a directory 2026-03-10T06:00:32.537 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T06:00:32.537 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T06:00:32.544 INFO:tasks.cephadm:Archiving crash dumps... 2026-03-10T06:00:32.544 DEBUG:teuthology.misc:Transferring archived files from vm02:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/919/remote/vm02/crash 2026-03-10T06:00:32.544 DEBUG:teuthology.orchestra.run.vm02:> sudo tar c -f - -C /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/crash -- . 
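[editor's note] The follow-up cleanup trips over the same bind-mount artifact: 'rm -f' only silences missing-file errors, so "rm: cannot remove '/etc/ceph/ceph.conf': Is a directory" leaves vm02's stray directories in place (the same command succeeds on vm05, where the paths are regular files or absent). A cleanup that tolerates either shape, as a sketch:

    import os
    import shutil

    for path in ("/etc/ceph/ceph.conf", "/etc/ceph/ceph.client.admin.keyring"):
        if os.path.isdir(path):
            shutil.rmtree(path)   # the directory-artifact case seen on vm02
        elif os.path.lexists(path):
            os.remove(path)       # the normal flat-file (or symlink) case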
2026-03-10T06:00:32.587 INFO:teuthology.orchestra.run.vm02.stderr:tar: /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/crash: Cannot open: No such file or directory 2026-03-10T06:00:32.587 INFO:teuthology.orchestra.run.vm02.stderr:tar: Error is not recoverable: exiting now 2026-03-10T06:00:32.587 DEBUG:teuthology.misc:Transferring archived files from vm05:/var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/919/remote/vm05/crash 2026-03-10T06:00:32.587 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/crash -- . 2026-03-10T06:00:32.594 INFO:teuthology.orchestra.run.vm05.stderr:tar: /var/lib/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/crash: Cannot open: No such file or directory 2026-03-10T06:00:32.594 INFO:teuthology.orchestra.run.vm05.stderr:tar: Error is not recoverable: exiting now 2026-03-10T06:00:32.595 INFO:tasks.cephadm:Checking cluster log for badness... 2026-03-10T06:00:32.595 DEBUG:teuthology.orchestra.run.vm02:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v CEPHADM_STRAY_DAEMON | egrep -v CEPHADM_FAILED_DAEMON | egrep -v CEPHADM_AGENT_DOWN | head -n 1 2026-03-10T06:00:32.639 INFO:tasks.cephadm:Compressing logs... 2026-03-10T06:00:32.640 DEBUG:teuthology.orchestra.run.vm02:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T06:00:32.683 DEBUG:teuthology.orchestra.run.vm05:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T06:00:32.689 INFO:teuthology.orchestra.run.vm02.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T06:00:32.690 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-10T06:00:32.691 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-osd.3.log 2026-03-10T06:00:32.691 INFO:teuthology.orchestra.run.vm05.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-10T06:00:32.691 INFO:teuthology.orchestra.run.vm05.stderr:‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T06:00:32.691 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph.log 2026-03-10T06:00:32.691 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-client.rgw.smpl.vm05.hqqmap.log 2026-03-10T06:00:32.692 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-mgr.x.log 2026-03-10T06:00:32.692 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-client.rgw.smpl.vm05.hqqmap.log: 75.7% -- replaced with /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-client.rgw.smpl.vm05.hqqmap.log.gz 2026-03-10T06:00:32.692 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph.log 2026-03-10T06:00:32.695 
INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-osd.3.log: 90.1% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-10T06:00:32.695 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-mon.c.log 2026-03-10T06:00:32.697 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph.log: 92.8% -- replaced with /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph.log.gz 2026-03-10T06:00:32.697 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-client.rgw.smpl.vm02.pglcfm.log 2026-03-10T06:00:32.698 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-mgr.x.log: gzip -5 --verbose -- /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-client.rgw.foo.vm05.hvmsxl.log 2026-03-10T06:00:32.703 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph.log: 87.2% -- replaced with /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph.log.gz 2026-03-10T06:00:32.703 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-mon.b.log 2026-03-10T06:00:32.703 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-mon.c.log: gzip -5 --verbose -- /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-client.rgw.foo.vm02.pbogjd.log 2026-03-10T06:00:32.705 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-client.rgw.foo.vm05.hvmsxl.log: 91.3% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-10T06:00:32.705 INFO:teuthology.orchestra.run.vm05.stderr: 76.9% -- replaced with /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-client.rgw.foo.vm05.hvmsxl.log.gz 2026-03-10T06:00:32.706 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-osd.5.log 2026-03-10T06:00:32.707 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-client.rgw.smpl.vm02.pglcfm.log: 76.4% -- replaced with /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-client.rgw.smpl.vm02.pglcfm.log.gz 2026-03-10T06:00:32.707 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-osd.1.log 2026-03-10T06:00:32.711 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-client.rgw.foo.vm02.pbogjd.log: 76.3% -- replaced with /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-client.rgw.foo.vm02.pbogjd.log.gz 2026-03-10T06:00:32.711 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-mgr.y.log 2026-03-10T06:00:32.718 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-mon.b.log: gzip -5 --verbose -- /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-osd.7.log 2026-03-10T06:00:32.722 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-mon.a.log 2026-03-10T06:00:32.732 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-mgr.y.log: gzip -5 --verbose -- 
/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-osd.2.log 2026-03-10T06:00:32.735 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-osd.5.log: 89.9% -- replaced with /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-mgr.x.log.gz 2026-03-10T06:00:32.737 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-osd.6.log 2026-03-10T06:00:32.746 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-osd.7.log: gzip -5 --verbose -- /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph.audit.log 2026-03-10T06:00:32.747 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-mon.a.log: gzip -5 --verbose -- /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph.audit.log 2026-03-10T06:00:32.749 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-osd.6.log: gzip -5 --verbose -- /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-volume.log 2026-03-10T06:00:32.751 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-osd.2.log: gzip -5 --verbose -- /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-volume.log 2026-03-10T06:00:32.757 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph.audit.log: gzip -5 --verbose -- /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph.cephadm.log 2026-03-10T06:00:32.758 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-volume.log: 94.3% -- replaced with /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph.audit.log.gz 2026-03-10T06:00:32.761 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph.audit.log: 90.7% -- replaced with /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph.audit.log.gz 2026-03-10T06:00:32.766 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph.cephadm.log 2026-03-10T06:00:32.767 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/tcmu-runner.log 2026-03-10T06:00:32.768 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph.cephadm.log: 90.4% -- replaced with /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph.cephadm.log.gz 2026-03-10T06:00:32.775 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-osd.0.log 2026-03-10T06:00:32.778 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-osd.4.log 2026-03-10T06:00:32.778 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/tcmu-runner.log: 82.9% -- replaced with /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/tcmu-runner.log.gz 2026-03-10T06:00:32.782 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph.cephadm.log: 83.3% -- replaced with /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph.cephadm.log.gz 2026-03-10T06:00:32.802 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-osd.4.log: 94.2% -- replaced with 
/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-volume.log.gz 2026-03-10T06:00:32.811 INFO:teuthology.orchestra.run.vm02.stderr:/var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-osd.0.log: 94.2% -- replaced with /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-volume.log.gz 2026-03-10T06:00:33.056 INFO:teuthology.orchestra.run.vm05.stderr: 92.7% -- replaced with /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-mon.b.log.gz 2026-03-10T06:00:33.233 INFO:teuthology.orchestra.run.vm02.stderr: 89.4% -- replaced with /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-mgr.y.log.gz 2026-03-10T06:00:33.302 INFO:teuthology.orchestra.run.vm02.stderr: 92.5% -- replaced with /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-mon.c.log.gz 2026-03-10T06:00:33.935 INFO:teuthology.orchestra.run.vm02.stderr: 94.1% -- replaced with /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-osd.2.log.gz 2026-03-10T06:00:33.937 INFO:teuthology.orchestra.run.vm05.stderr: 93.9% -- replaced with /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-osd.6.log.gz 2026-03-10T06:00:33.978 INFO:teuthology.orchestra.run.vm02.stderr: 91.4% -- replaced with /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-mon.a.log.gz 2026-03-10T06:00:34.145 INFO:teuthology.orchestra.run.vm05.stderr: 93.9% -- replaced with /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-osd.5.log.gz 2026-03-10T06:00:34.176 INFO:teuthology.orchestra.run.vm05.stderr: 94.3% -- replaced with /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-osd.7.log.gz 2026-03-10T06:00:34.385 INFO:teuthology.orchestra.run.vm05.stderr: 94.1% -- replaced with /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-osd.4.log.gz 2026-03-10T06:00:34.386 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-10T06:00:34.386 INFO:teuthology.orchestra.run.vm05.stderr:real 0m1.700s 2026-03-10T06:00:34.386 INFO:teuthology.orchestra.run.vm05.stderr:user 0m3.040s 2026-03-10T06:00:34.386 INFO:teuthology.orchestra.run.vm05.stderr:sys 0m0.144s 2026-03-10T06:00:34.447 INFO:teuthology.orchestra.run.vm02.stderr: 94.1% -- replaced with /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-osd.1.log.gz 2026-03-10T06:00:34.484 INFO:teuthology.orchestra.run.vm02.stderr: 93.9% -- replaced with /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-osd.0.log.gz 2026-03-10T06:00:34.565 INFO:teuthology.orchestra.run.vm02.stderr: 94.1% -- replaced with /var/log/ceph/107483ae-1c44-11f1-b530-c1172cd6122a/ceph-osd.3.log.gz 2026-03-10T06:00:34.566 INFO:teuthology.orchestra.run.vm02.stderr: 2026-03-10T06:00:34.566 INFO:teuthology.orchestra.run.vm02.stderr:real 0m1.882s 2026-03-10T06:00:34.566 INFO:teuthology.orchestra.run.vm02.stderr:user 0m3.465s 2026-03-10T06:00:34.567 INFO:teuthology.orchestra.run.vm02.stderr:sys 0m0.208s 2026-03-10T06:00:34.567 INFO:tasks.cephadm:Archiving logs... 2026-03-10T06:00:34.567 DEBUG:teuthology.misc:Transferring archived files from vm02:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/919/remote/vm02/log 2026-03-10T06:00:34.567 DEBUG:teuthology.orchestra.run.vm02:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-10T06:00:34.795 DEBUG:teuthology.misc:Transferring archived files from vm05:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/919/remote/vm05/log 2026-03-10T06:00:34.795 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-10T06:00:34.950 INFO:tasks.cephadm:Removing cluster... 
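[editor's note] Before compressing, the "Checking cluster log for badness" step grepped ceph.log through a whole chain: keep [ERR]/[WRN]/[SEC] lines, keep only CEPHADM_ matches, drop each ignorelisted health code, and report the first survivor (no match was printed here, so the cluster log passed by those rules). The same filter expressed in Python, using the exact patterns from the egrep chain above (sketch):

    import re

    SEVERITY = re.compile(r"\[ERR\]|\[WRN\]|\[SEC\]")
    ONLY = re.compile(r"CEPHADM_")
    IGNORE = [re.compile(p) for p in (
        r"\(MDS_ALL_DOWN\)",
        r"\(MDS_UP_LESS_THAN_MAX\)",
        r"CEPHADM_STRAY_DAEMON",
        r"CEPHADM_FAILED_DAEMON",
        r"CEPHADM_AGENT_DOWN",
    )]

    def first_bad_line(log_path: str):
        """Equivalent of the egrep pipeline ending in 'head -n 1'."""
        with open(log_path) as f:
            for line in f:
                if (SEVERITY.search(line) and ONLY.search(line)
                        and not any(p.search(line) for p in IGNORE)):
                    return line
        return None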
2026-03-10T06:00:34.950 INFO:tasks.cephadm:Removing cluster...
2026-03-10T06:00:34.950 DEBUG:teuthology.orchestra.run.vm02:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 107483ae-1c44-11f1-b530-c1172cd6122a --force
2026-03-10T06:00:35.572 INFO:teuthology.orchestra.run.vm02.stderr:Traceback (most recent call last):
2026-03-10T06:00:35.572 INFO:teuthology.orchestra.run.vm02.stderr:  File "/home/ubuntu/cephtest/cephadm", line 8634, in <module>
2026-03-10T06:00:35.572 INFO:teuthology.orchestra.run.vm02.stderr:    main()
2026-03-10T06:00:35.572 INFO:teuthology.orchestra.run.vm02.stderr:  File "/home/ubuntu/cephtest/cephadm", line 8622, in main
2026-03-10T06:00:35.573 INFO:teuthology.orchestra.run.vm02.stderr:    r = ctx.func(ctx)
2026-03-10T06:00:35.573 INFO:teuthology.orchestra.run.vm02.stderr:  File "/home/ubuntu/cephtest/cephadm", line 6538, in command_rm_cluster
2026-03-10T06:00:35.573 INFO:teuthology.orchestra.run.vm02.stderr:    with open(files[0]) as f:
2026-03-10T06:00:35.573 INFO:teuthology.orchestra.run.vm02.stderr:IsADirectoryError: [Errno 21] Is a directory: '/etc/ceph/ceph.conf'
2026-03-10T06:00:35.584 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T06:00:35.585 INFO:tasks.cephadm:Teardown complete
2026-03-10T06:00:35.585 ERROR:teuthology.run_tasks:Manager failed: cephadm
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/cephadm.py", line 2216, in task
    with contextutil.nested(
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/cephadm.py", line 1845, in initialize_config
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/cephadm.py", line 229, in download_cephadm
    _rm_cluster(ctx, cluster_name)
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/cephadm.py", line 383, in _rm_cluster
    remote.run(args=[
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm02 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 107483ae-1c44-11f1-b530-c1172cd6122a --force'
2026-03-10T06:00:35.585 DEBUG:teuthology.run_tasks:Unwinding manager clock
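
The remote traceback above is the root cause of the failed teardown: command_rm_cluster globs for the cluster's conf file and opens the first match, but on this host /etc/ceph/ceph.conf exists as a directory (a typical leftover when a container runtime bind-mounts the path before the file exists), so open() raises IsADirectoryError. A minimal illustration of the failure mode and one possible guard; read_cluster_conf is a hypothetical stand-in, not the actual cephadm code:

    import glob
    import os

    def read_cluster_conf(pattern: str = '/etc/ceph/ceph.conf'):
        files = glob.glob(pattern)
        if not files:
            return None
        # open() on a directory raises IsADirectoryError (Errno 21), exactly
        # as in the traceback above; insisting on a regular file avoids it.
        if not os.path.isfile(files[0]):
            return None
        with open(files[0]) as f:
            return f.read()
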
2026-03-10T06:00:35.587 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-10T06:00:35.587 DEBUG:teuthology.orchestra.run.vm02:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T06:00:35.588 DEBUG:teuthology.orchestra.run.vm05:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T06:00:35.643 INFO:teuthology.orchestra.run.vm02.stdout: remote refid st t when poll reach delay offset jitter
2026-03-10T06:00:35.644 INFO:teuthology.orchestra.run.vm02.stdout:==============================================================================
2026-03-10T06:00:35.644 INFO:teuthology.orchestra.run.vm02.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T06:00:35.644 INFO:teuthology.orchestra.run.vm02.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T06:00:35.644 INFO:teuthology.orchestra.run.vm02.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T06:00:35.644 INFO:teuthology.orchestra.run.vm02.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T06:00:35.644 INFO:teuthology.orchestra.run.vm02.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T06:00:35.644 INFO:teuthology.orchestra.run.vm02.stdout:-ntp1.uni-ulm.de 129.69.253.1 2 u 8 64 377 28.032 +0.769 0.533
2026-03-10T06:00:35.644 INFO:teuthology.orchestra.run.vm02.stdout:-185.252.140.125 216.239.35.4 2 u 10 64 377 25.148 +0.697 0.566
2026-03-10T06:00:35.644 INFO:teuthology.orchestra.run.vm02.stdout:-static.215.156. 35.73.197.144 2 u 9 64 377 23.526 +1.003 0.498
2026-03-10T06:00:35.644 INFO:teuthology.orchestra.run.vm02.stdout:-185.252.140.126 218.73.139.35 2 u 5 64 377 25.156 +1.098 0.285
2026-03-10T06:00:35.644 INFO:teuthology.orchestra.run.vm02.stdout:+ns1.blazing.de 213.172.96.14 3 u 7 64 377 31.902 +0.506 0.533
2026-03-10T06:00:35.644 INFO:teuthology.orchestra.run.vm02.stdout:+ec2-18-192-244- 216.239.35.8 2 u 3 64 377 23.735 -0.660 0.533
2026-03-10T06:00:35.644 INFO:teuthology.orchestra.run.vm02.stdout:*158.101.188.125 189.97.54.122 2 u 1 64 377 20.966 +0.296 0.541
2026-03-10T06:00:35.970 INFO:teuthology.orchestra.run.vm05.stdout: remote refid st t when poll reach delay offset jitter
2026-03-10T06:00:35.970 INFO:teuthology.orchestra.run.vm05.stdout:==============================================================================
2026-03-10T06:00:35.970 INFO:teuthology.orchestra.run.vm05.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T06:00:35.970 INFO:teuthology.orchestra.run.vm05.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T06:00:35.970 INFO:teuthology.orchestra.run.vm05.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T06:00:35.970 INFO:teuthology.orchestra.run.vm05.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T06:00:35.970 INFO:teuthology.orchestra.run.vm05.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T06:00:35.970 INFO:teuthology.orchestra.run.vm05.stdout:-static.215.156. 35.73.197.144 2 u 13 64 377 23.533 -6.437 3.396
2026-03-10T06:00:35.970 INFO:teuthology.orchestra.run.vm05.stdout:-185.252.140.125 216.239.35.4 2 u 15 64 377 25.087 -1.436 3.218
2026-03-10T06:00:35.970 INFO:teuthology.orchestra.run.vm05.stdout:-mail.morbitzer. 205.46.178.169 2 u 15 64 377 28.231 -8.966 3.483
2026-03-10T06:00:35.970 INFO:teuthology.orchestra.run.vm05.stdout:-ec2-18-192-244- 216.239.35.8 2 u 15 64 377 23.538 -2.940 3.349
2026-03-10T06:00:35.970 INFO:teuthology.orchestra.run.vm05.stdout:#www.h4x-gamers. 192.53.103.108 2 u 15 64 377 24.952 -5.171 2.121
2026-03-10T06:00:35.970 INFO:teuthology.orchestra.run.vm05.stdout:+ntp1.uni-ulm.de 129.69.253.1 2 u 9 64 377 27.349 -2.651 3.219
2026-03-10T06:00:35.970 INFO:teuthology.orchestra.run.vm05.stdout:#pve2.h4x-gamers 192.53.103.108 2 u 8 64 377 25.068 -6.138 2.761
2026-03-10T06:00:35.970 INFO:teuthology.orchestra.run.vm05.stdout:#82.165.178.31 82.64.45.50 2 u 12 64 377 28.812 -3.586 2.188
2026-03-10T06:00:35.970 INFO:teuthology.orchestra.run.vm05.stdout:#212.132.108.186 131.188.3.221 2 u 18 64 377 29.001 -3.528 1.858
2026-03-10T06:00:35.970 INFO:teuthology.orchestra.run.vm05.stdout:+141.84.43.73 40.33.41.76 2 u 14 64 377 31.382 -3.499 2.286
2026-03-10T06:00:35.971 INFO:teuthology.orchestra.run.vm05.stdout:+139-162-187-236 80.192.165.246 2 u 5 64 377 22.667 -9.300 2.079
2026-03-10T06:00:35.971 INFO:teuthology.orchestra.run.vm05.stdout:+ns1.blazing.de 213.172.96.14 3 u 63 64 377 31.908 -2.461 2.198
2026-03-10T06:00:35.971 INFO:teuthology.orchestra.run.vm05.stdout:#alphyn.canonica 132.163.96.1 2 u 41 64 377 97.284 -1.694 3.138
2026-03-10T06:00:35.971 INFO:teuthology.orchestra.run.vm05.stdout:*158.101.188.125 189.97.54.122 2 u 9 64 377 20.977 -2.051 3.238
2026-03-10T06:00:35.971 INFO:teuthology.orchestra.run.vm05.stdout:+185.125.190.58 145.238.80.80 2 u 38 64 377 35.377 -6.816 2.098
2026-03-10T06:00:35.971 INFO:teuthology.orchestra.run.vm05.stdout:#s1.heeg.it 131.188.3.220 2 u 6 64 377 23.696 -1.524 3.399
2026-03-10T06:00:35.971 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-10T06:00:35.973 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-10T06:00:35.973 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-10T06:00:35.975 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-10T06:00:35.977 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-10T06:00:35.979 INFO:teuthology.task.internal:Duration was 1177.663584 seconds
2026-03-10T06:00:35.979 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-10T06:00:35.980 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-10T06:00:35.981 DEBUG:teuthology.orchestra.run.vm02:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T06:00:35.982 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
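
The final clock-skew check above only records the raw ntpq -p peer tables; judging whether the offsets are acceptable is left to whoever reads the log. A hypothetical helper (not part of teuthology) that extracts the worst peer offset from such output:

    def max_peer_offset_ms(ntpq_output: str) -> float:
        """Return the largest absolute peer offset, in milliseconds."""
        offsets = []
        for line in ntpq_output.splitlines():
            fields = line.split()
            # Peer rows have 10 columns: remote, refid, st, t, when, poll,
            # reach, delay, offset, jitter. Skip the unresolved .POOL. stubs.
            if len(fields) != 10 or fields[1] == '.POOL.':
                continue
            try:
                offsets.append(abs(float(fields[8])))
            except ValueError:
                continue  # header row
        return max(offsets, default=0.0)

Applied to the vm05 table above, this would report 9.300 ms (the 139-162-187-236 peer), comfortably below the 50 ms default of mon_clock_drift_allowed.
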
2026-03-10T06:00:36.009 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-10T06:00:36.009 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm02.local
2026-03-10T06:00:36.010 DEBUG:teuthology.orchestra.run.vm02:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T06:00:36.060 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm05.local
2026-03-10T06:00:36.060 DEBUG:teuthology.orchestra.run.vm05:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T06:00:36.071 INFO:teuthology.task.internal.syslog:Gathering journalctl...
2026-03-10T06:00:36.071 DEBUG:teuthology.orchestra.run.vm02:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T06:00:36.103 DEBUG:teuthology.orchestra.run.vm05:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
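
The two long grep pipelines above are teuthology's kernel-log triage: flag any kern.log line mentioning BUG, INFO, or DEADLOCK, discard every line excused by a fixed ignore list, and keep only the first survivor (an empty result means a clean run). The same logic as a Python sketch, with the ignore list abbreviated; the authoritative patterns are the ones in the command above:

    import re

    SUSPECT = re.compile(r'\bBUG\b|\bINFO\b|\bDEADLOCK\b')
    IGNORE = [re.compile(p) for p in (
        r'task .* blocked for more than .* seconds',
        r'lockdep is turned off',
        r'CRON',
        r'ceph-crash',
        r'\btcmu-runner\b.*\bINFO\b',
        # ...the remaining exclusions from the pipeline are omitted here
    )]

    def first_suspect_line(path: str):
        # Equivalent of: grep -E ... kern.log | grep -v ... | head -n 1
        with open(path, errors='replace') as f:
            for line in f:
                if SUSPECT.search(line) and not any(p.search(line) for p in IGNORE):
                    return line.rstrip('\n')
        return None
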
2026-03-10T06:00:36.195 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-10T06:00:36.195 DEBUG:teuthology.orchestra.run.vm02:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T06:00:36.196 DEBUG:teuthology.orchestra.run.vm05:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T06:00:36.201 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T06:00:36.201 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T06:00:36.201 INFO:teuthology.orchestra.run.vm02.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T06:00:36.202 INFO:teuthology.orchestra.run.vm02.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T06:00:36.202 INFO:teuthology.orchestra.run.vm02.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: /home/ubuntu/cephtest/archive/syslog/journalctl.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T06:00:36.203 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T06:00:36.203 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T06:00:36.204 INFO:teuthology.orchestra.run.vm05.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T06:00:36.204 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T06:00:36.204 INFO:teuthology.orchestra.run.vm05.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0%/home/ubuntu/cephtest/archive/syslog/journalctl.log: -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T06:00:36.221 INFO:teuthology.orchestra.run.vm05.stderr: 89.1% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T06:00:36.221 INFO:teuthology.orchestra.run.vm02.stderr: 91.2% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T06:00:36.222 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-10T06:00:36.225 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-10T06:00:36.225 DEBUG:teuthology.orchestra.run.vm02:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T06:00:36.273 DEBUG:teuthology.orchestra.run.vm05:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T06:00:36.280 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-10T06:00:36.283 DEBUG:teuthology.orchestra.run.vm02:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T06:00:36.315 DEBUG:teuthology.orchestra.run.vm05:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T06:00:36.320 INFO:teuthology.orchestra.run.vm02.stdout:kernel.core_pattern = core
2026-03-10T06:00:36.328 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern = core
2026-03-10T06:00:36.335 DEBUG:teuthology.orchestra.run.vm02:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T06:00:36.372 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T06:00:36.373 DEBUG:teuthology.orchestra.run.vm05:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T06:00:36.379 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T06:00:36.379 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-10T06:00:36.382 INFO:teuthology.task.internal:Transferring archived files...
2026-03-10T06:00:36.382 DEBUG:teuthology.misc:Transferring archived files from vm02:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/919/remote/vm02
2026-03-10T06:00:36.382 DEBUG:teuthology.orchestra.run.vm02:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T06:00:36.422 DEBUG:teuthology.misc:Transferring archived files from vm05:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/919/remote/vm05
2026-03-10T06:00:36.422 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T06:00:36.431 INFO:teuthology.task.internal:Removing archive directory...
2026-03-10T06:00:36.431 DEBUG:teuthology.orchestra.run.vm02:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T06:00:36.463 DEBUG:teuthology.orchestra.run.vm05:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T06:00:36.476 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-10T06:00:36.478 INFO:teuthology.task.internal:Not uploading archives.
2026-03-10T06:00:36.478 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
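
The internal.coredump unwind above does three things: reset kernel.core_pattern, delete any core that file(1) attributes to systemd-sysusers (known crash noise on these images), and remove the directory only if nothing else is left; the subsequent test -e exiting 1 on both hosts confirms no real coredumps survived. A rough Python rendering of that sweep, as an illustration rather than teuthology's code:

    import os
    import subprocess

    def sweep_coredumps(directory: str) -> None:
        for root, _dirs, names in os.walk(directory):
            for name in names:
                path = os.path.join(root, name)
                # file(1) names the binary that dumped core; drop known noise only.
                kind = subprocess.run(['file', path], capture_output=True,
                                      text=True).stdout
                if 'systemd-sysusers' in kind:
                    os.remove(path)
        try:
            os.rmdir(directory)  # like rmdir --ignore-fail-on-non-empty
        except OSError:
            pass  # keep the directory when genuine cores remain
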
2026-03-10T06:00:36.481 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-10T06:00:36.481 DEBUG:teuthology.orchestra.run.vm02:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T06:00:36.507 DEBUG:teuthology.orchestra.run.vm05:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T06:00:36.509 INFO:teuthology.orchestra.run.vm02.stdout: 258076 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 10 06:00 /home/ubuntu/cephtest
2026-03-10T06:00:36.509 INFO:teuthology.orchestra.run.vm02.stdout: 258199 316 -rwxrwxr-x 1 ubuntu ubuntu 320521 Mar 10 05:43 /home/ubuntu/cephtest/cephadm
2026-03-10T06:00:36.510 INFO:teuthology.orchestra.run.vm02.stderr:rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty
2026-03-10T06:00:36.518 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T06:00:36.518 ERROR:teuthology.run_tasks:Manager failed: internal.base
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/task/internal/__init__.py", line 48, in base
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/cephadm.py", line 2216, in task
    with contextutil.nested(
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/cephadm.py", line 1845, in initialize_config
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/cephadm.py", line 229, in download_cephadm
    _rm_cluster(ctx, cluster_name)
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/cephadm.py", line 383, in _rm_cluster
    remote.run(args=[
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm02 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 107483ae-1c44-11f1-b530-c1172cd6122a --force'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/teuthology/teuthology/task/internal/__init__.py", line 53, in base
    run.wait(
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm02 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
2026-03-10T06:00:36.518 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-10T06:00:36.521 DEBUG:teuthology.run_tasks:Exception was not quenched, exiting: CommandFailedError: Command failed on vm02 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 107483ae-1c44-11f1-b530-c1172cd6122a --force'
2026-03-10T06:00:36.521 INFO:teuthology.run:Summary data:
description: orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/connectivity}
duration: 1177.6635837554932
failure_reason: 'Command failed on vm02 with status 1: ''sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 107483ae-1c44-11f1-b530-c1172cd6122a --force'''
owner: kyr
status: fail
success: false
2026-03-10T06:00:36.521 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T06:00:36.523 INFO:teuthology.orchestra.run.vm05.stdout: 258079 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 10 06:00 /home/ubuntu/cephtest
2026-03-10T06:00:36.523 INFO:teuthology.orchestra.run.vm05.stdout: 258199 316 -rwxrwxr-x 1 ubuntu ubuntu 320521 Mar 10 05:43 /home/ubuntu/cephtest/cephadm
2026-03-10T06:00:36.523 INFO:teuthology.orchestra.run.vm05.stderr:rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty
2026-03-10T06:00:36.540 INFO:teuthology.run:FAIL